Rethink Your Understanding

Transforming Software Delivery


We Have Metrics. Now What?

May 11, 2025 by philc

7 min read

A Guide for Legacy-Minded Leaders on Using Metrics to Drive the Right Behavior

From Outputs to Outcomes

A senior executive recently asked a VP of Engineering and the Head of Architecture for industry benchmarks on Flow Metrics. At first, this seemed like a positive step, shifting the focus from individual output to team-level outcomes. However, the purpose of the request raised concerns. These benchmarks were intended to evaluate engineering managers’ performance for annual reviews and possibly bonuses.

That’s a problem. Using system-level metrics to judge individual performance is a common mistake. It might look good on paper, but it turns system-level signals into personal scorecards, creating the very dysfunction these metrics are meant to reveal and improve. Used this way, metrics lose their value and invite gaming over genuine improvement. This guide is for senior leaders adopting team-level metrics who want to use them effectively. You’ve chosen better metrics. Now, let’s make sure they work as intended.

To clarify, the executive’s team structure follows the Engineering Manager (EM) model, where EMs are responsible for the performance of the delivery team. In contrast, I support an alternative approach with autonomous teams built around team topologies. These teams include all the roles needed to deliver value, without a manager embedded in the team. These are two common but very different models of team structure and performance evaluation.

This isn’t the first time I’ve seen senior leaders misuse team-level metrics, and it likely won’t be the last. So I asked myself: now that more leaders have agreed to adopt the right metrics, do they know how to use them responsibly?

I will admit that I was frustrated to learn of this request, but the event inspired me to create a guide for leaders, especially those used to traditional, output-focused models who are new to Flow Metrics and team-level measurement. This article shares my approach to metrics, focusing on curiosity, care, and a learning mindset. It’s not a set of rules. You’ve already chosen team-aligned metrics, and now I’ll explain how we use them to drive improvement while avoiding the pitfalls of judgment or manipulation.

A Note on Industry Benchmarks

As described at the beginning of this post, the senior leader requested industry benchmarks, specifically for Flow Metrics. When benchmarks are treated as targets or internal scorecards, they can reduce transparency. Teams might focus on meeting the numbers instead of addressing challenges openly.

Benchmarks are helpful, but only when applied thoughtfully. They’re most effective at the portfolio or organizational level rather than as performance targets for individual teams. Teams differ significantly in architecture, complexity, support workload, and business focus. Comparing an infrastructure-heavy team to a greenfield product team isn’t practical or fair.

Use benchmarks to understand patterns, not to assign grades. Ask instead: “Is this team improving against their baseline? What’s helping or getting in the way?”
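To make the baseline comparison concrete, here is a minimal sketch in Python. It assumes each team’s metric history is simply a chronological list of monthly values; the data and window size are illustrative, not a prescribed method.

```python
from statistics import mean

def change_vs_baseline(history, window=3):
    """Compare a team's recent metric values to its own baseline.

    `history` is a chronological list of monthly values (e.g., Flow Time
    in days). The first `window` months form the baseline; the most
    recent `window` months form the current period.
    """
    if len(history) < 2 * window:
        raise ValueError("not enough history to compare")
    baseline = mean(history[:window])
    current = mean(history[-window:])
    return (current - baseline) / baseline * 100  # percent change

# Flow Time dropping from ~20 days to ~14 days is a ~30% improvement
# against the team's own baseline, wherever any industry benchmark sits.
flow_time_days = [21, 20, 19, 18, 16, 15, 14, 13]
print(f"{change_vs_baseline(flow_time_days):+.1f}% vs. baseline")  # -30.0%
```

The question this answers is the one above: improvement against the team’s own baseline, not position against someone else’s benchmark.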

How to Use Team Metrics Without Breaking Trust or the System

1. Start by inviting teams into the process

  • Don’t tell them, “Flow Efficiency must go up 10%.”
  • Ask instead: “Here’s what the data shows. What’s behind this? What could we try?”

Why: Positive intent. Teams already want to improve. They’ll take ownership if you bring them into the process and give them time and space to act. Top-down mandates might push short-term results, but they usually kill long-term improvement.

2. Understand inputs vs. outputs

  • Output metrics (like Flow Time, PR throughput, or change failure rate) are results. You don’t control them directly.
  • Input metrics (like review turnaround time or number of unplanned interruptions) reflect behaviors teams can change.

Why: If you set targets on outputs, teams won’t know what to do. That’s when you get gaming or frustration. Input metrics give teams something they can improve. That’s how you get real system-level change.

I’ve been saying this for a while, and I like how Abi Noda and the DX team explain it: input vs. output metrics. It’s the same idea as leading vs. lagging indicators. Focus on what teams can influence, not just what you want to see improve.
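As a concrete illustration of the distinction, here is a small Python sketch. The work-item records and field names are hypothetical, not any particular tool’s schema.

```python
from datetime import datetime

# Hypothetical work-item records; the field names are illustrative only.
items = [
    {"started": "2025-04-01", "review_requested": "2025-04-04",
     "review_done": "2025-04-08", "finished": "2025-04-10"},
    {"started": "2025-04-02", "review_requested": "2025-04-03",
     "review_done": "2025-04-04", "finished": "2025-04-07"},
]

def days_between(a, b):
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

# Output metric: Flow Time (started -> finished). A result of the whole
# system; observe it, but don't hand it to teams as a target.
flow_times = [days_between(i["started"], i["finished"]) for i in items]

# Input metric: review turnaround. A behavior the team can change, e.g.,
# by agreeing to pick up reviews within one working day.
turnarounds = [days_between(i["review_requested"], i["review_done"]) for i in items]

print(f"avg flow time: {sum(flow_times) / len(flow_times):.1f} days")            # 7.0
print(f"avg review turnaround: {sum(turnarounds) / len(turnarounds):.1f} days")  # 2.5
```

The team can act on review turnaround directly; flow time should follow.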

3. Don’t turn metrics into targets

When a measure becomes a target, it stops being useful (Goodhart’s Law).

  • Don’t turn system health metrics into KPIs.
  • If people feel judged by a number, they’ll focus on making the number look good instead of fixing the system.

Why: You’ll get shallow progress, not real change. And you won’t know the difference because the data will look better. The cost? Lost trust, lower morale, and bad decisions.

4. Always add context

  • Depending on the situation, a 10-day Flow Time might be great or terrible.
  • Ask about the team’s product, the architecture, the kind of work they do, and how much unplanned work they handle.

Why: Numbers without context are misleading. They don’t tell the story. If you act on them without understanding what’s behind them, you’ll create the wrong incentives and fix the wrong things.

5. Set targets the right way

  • Not every metric needs a goal.
  • Some should trend up; others should stay stable.
  • Don’t use blanket rules like “improve everything by 10%.”

Why: Metrics behave differently. Some take months to move. Others can be gamed easily. Think about what makes sense for that metric in that context. Real improvement takes time; chasing the wrong number can do more harm than good.

6. Tie metrics back to outcomes and the business

  • Don’t just say, “Flow Efficiency improved.” Ask what changed:
    • Did we deliver faster?
    • Did we reduce the cost of delay?
    • Did we create customer value?

If you’ve read my other posts, you know I recommend tying every epic and initiative to an anticipated outcome. That mindset also applies to metrics. Don’t just look at the number. Ask what value it represents.

Also, it’s critical that teams use metrics to identify their bottlenecks. That’s the key. Real flow improvement comes from fixing the most significant constraint. If you’re improving something upstream or downstream of the bottleneck, you’re not improving flow. You’re just making things look better in one part of the system. It’s localized and often a wasted effort.

Why: If the goal is better business outcomes, you must connect what the team does with how it moves the needle. Metrics are just the starting point for that conversation.
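Bottleneck identification can be as simple as asking which workflow stage holds work the longest. Here is a minimal sketch; the stage names and timings are made up for illustration.

```python
# Hypothetical per-item time (in days) spent waiting in each workflow stage.
stage_times = {
    "ready":       [2, 5, 3, 4],
    "development": [3, 2, 4, 3],
    "code review": [6, 8, 5, 7],   # items queue here the longest
    "testing":     [1, 2, 1, 2],
}

avg_wait = {stage: sum(t) / len(t) for stage, t in stage_times.items()}
bottleneck = max(avg_wait, key=avg_wait.get)

print(f"bottleneck: {bottleneck} ({avg_wait[bottleneck]:.1f} days avg)")
# Optimizing any other stage first is the localized improvement warned
# about above: flow only improves when the constraint itself moves.
```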

7. Don’t track too many things

  • Stick to 3-5 input metrics at a time.
  • Make these part of retrospectives, not just leadership dashboards.

Why: Focus drives improvement. If everything is a priority, nothing is. Too many metrics dilute the team’s energy. Let them pick the right ones and go deep.

8. Build a feedback loop that works

  • Metrics are most useful when teams review them regularly.
  • Make time to reflect and adapt.

We’re still experimenting with the cadence that works best. Right now, monthly retrospectives are the minimum; they give teams short feedback loops to adjust their improvement efforts. A quarterly check-in is still helpful for zooming out. Both are valuable, and together they give teams enough time to try, reflect, and adapt.

Why: Improvement requires learning. Dashboards don’t improve teams. Feedback does. Create a rhythm where teams can test ideas, measure progress, and shift direction.

A Word of Caution About Using Metrics for Performance Reviews

Some leaders ask, “Can I use Flow Metrics to evaluate my engineering managers?” You can, but it’s risky.

Flow Metrics tell you how the system is performing. They’re not designed to evaluate individuals. If you tie them to bonuses or promotions, you’ll likely get:

  • Teams gaming the data
  • Managers focusing on optics, not problems
  • Reduced trust and openness

Why: When you make metrics part of a performance review, people stop using them for improvement. They stop learning. They play it safe. That hurts the team and the system.

Here’s what you can do instead:

In manager-led models, Engineering Managers are typically held accountable for team delivery. In cross-functional models, Agile Delivery Managers help guide improvement but don’t directly own delivery outcomes. In either case, someone helps the team improve.

That role should be evaluated, but not based on the raw numbers alone. Instead, assess how they supported improvement.

Thoughts on assessing “Guiding Team Improvement”:

Bottleneck Identification

  • Did they help surface and clarify constraints?
  • Are bottlenecks discussed and addressed?

Team-Led Problem Solving

  • Did they enable experiments and reflection, not dictate fixes?

Use of Metrics for Insight, Not Pressure

  • Did they foster learning and transparency?

Facilitation of Improvement Over Time

  • Do the trends show intentional learning?

Cross-Team Alignment and Issue Escalation

  • Are they surfacing systemic issues beyond their team?

Focus on influence, not control. Assess those accountable for team performance improvement based on how they influence system improvements and support their teams.

  • Use metrics to guide coaching conversations, not to judge.
  • Evaluate managers based on how they improve the system and support their teams.
  • Reward experimentation, transparency, and alignment to business value.

Performance is bigger than one number. Metrics help tell the story, but they aren’t the story.

Sidebar: What if Gamification Still Improves the Metric?

I’ve heard some folks say, “I’m okay with gamification. If the number gets better, the team’s getting better.” That logic might work in the short term but breaks down over time. Here’s why:

  1. It often hides real issues.
  2. It focuses on optics instead of outcomes.
  3. It breaks feedback loops that drive learning.
  4. It leads to local, not systemic, improvement.

So, while gamification might improve the score, it doesn’t consistently improve the system, and seldom does so efficiently or sustainably.

If the goal is long-term performance, trust the process. Let teams learn from the data. Don’t let the number become the mission.

Metrics are just tools. If you treat them like a scoreboard, you’ll create fear. If you treat them like a flashlight, they’ll help you and your teams see what’s happening.

Don’t use metrics to judge individuals. Use them to guide conversations, surface problems, and support improvement. That’s how you build trust and better systems.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

A Self-Guided Performance Assessment for Agile Delivery Teams

May 3, 2025 by philc

12 min read

This all started with a conversation and a question: “We do performance reviews for individuals, but what about teams?” If we care about how individuals perform, shouldn’t we also care about how teams perform together?

Why do we even work in teams?

It’s a strategic decision. In modern software delivery, teams are the core drivers of value. A strong team can achieve results far greater than what individuals can accomplish alone. How well we think and work together as a team (collective intelligence) is more impactful than the individual performance of its members. That’s why improving team effectiveness is so important.

But what does team effectiveness enable?

  • Execution: High-performing teams work faster and better meet customer needs. They focus on the right priorities, adjust quickly, and recover faster when problems arise.
  • Engagement and Retention: People stay in workplaces where they feel their contributions matter, where they’re supported, and where they feel safe to share ideas. Strong teams build this kind of environment.
  • Sustainable Performance: Burnout occurs when individuals take on too much on their own. Strong teams share the workload, support one another, and collaborate to solve problems.

Many organizations still evaluate individuals in isolation through individual performance assessments, overlooking their performance within the team, their contributions, and the overall dynamics and health of the team.

So, let’s ask a better question: How well does your team work together?

  • What strengths and skills is the team using?
  • Which areas need more development or clarification?
  • How often does your team take time to review its performance together?
  • Do you have a system in place for gathering feedback and implementing ongoing improvements?

Just like individuals, even high-performing teams experience slumps or periods of lower performance. Acknowledging this is the first step toward helping the team return to excellence.

This article provides a self-assessment tool to help teams evaluate their current working practices at a specific point in time. The goal isn’t to place blame or measure productivity but to spark open conversations and create clarity that leads to improvement. When teams get feedback on performance and collaborate effectively, everything improves: delivery speed, developer satisfaction, and overall business impact.

A Reflection More Than a Framework

This isn’t a manager’s tool or a leadership scorecard. It’s a guide for teams looking to improve how they collaborate with purpose. It’s for delivery teams that value their habits just as much as their results.

Use it as a retro exercise. A quarterly reset. A mirror.

Why Team Reflection Matters

We already measure delivery performance. DORA. Flow. Developer Experience.
But those metrics don’t always answer:

  • Are we doing what we said mattered, like observability and test coverage?
  • Are we working as a team or as individuals executing in parallel?
  • Do we hold each other accountable for delivering with integrity?

This is the gap: how teams work together. This guide helps fill it, not to replace metrics, but to deepen the story they tell.

What This Is (And Isn’t)

You might ask: “Don’t SAFe, SPACE, DORA, or Flow Metrics already do this?”
Yes and no. Those frameworks are valuable. But they answer different questions:

  • DORA & Flow: How fast and stable is our delivery?
  • DX Core 4 & SPACE: How do developers feel about their work environment?
  • Maturity Models: How fully have we adopted Agile practices?
  • SAFe Measure and Grow: For organizations implementing SAFe, it evaluates enterprise agility in dimensions such as team agility, product delivery, and lean leadership.

What they don’t always show is:

  • Are we skipping discipline under pressure?
  • Do we collaborate across roles or operate in silos?
  • Are we shipping through red builds and hoping for the best?

But the question stuck with me:
Shouldn’t we do the same for teams if we hold individuals accountable for how they show up?

What follows is a framework and a conversation starter, not a mandate. It’s just something to consider because, in many organizations, teams are where the real impact (or dysfunction) lives.

Suggested Team Reflection Dimensions

You don’t need to use every category. Start with the ones that matter most to your team, or define your own. This section is designed to help teams reflect on how they work together, not just what they deliver.

But before diving into individual dimensions, start with this simple but powerful check-in.

Would We Consider Ourselves Underperforming, Performing, or High-Performing?

This question encourages self-awareness without any external judgment. The team should decide together: no scorecards, no leadership evaluations, just a shared reflection on your experience as a delivery team.

From there, explore:

  • What makes us feel that way?
    What behaviors, habits, or examples support our self-assessment?
  • What should we keep doing?
    What’s already working well that we want to protect or double down on?
  • What should we stop doing?
    What’s causing friction, waste, or misalignment?
  • What should we start doing?
    What’s missing that could improve how we operate?
  • Do we have the skills and knowledge needed to meet our work demands?

This discussion often surfaces more actionable insight than metrics alone. It grounds the assessment in the team’s shared experience and sets the tone for improvement, not judgment.

A Flexible Self-Evaluation Scorecard

While this isn’t designed as a top-down performance tool, teams can use it as a self-evaluation scorecard if they choose. The reflection tables that follow can help teams:

  • Identify where they align today: underperforming, performing, or high-performing.
  • Recognize the dimensions where they excel and where they have room to improve.
  • Prioritize the changes that will have the greatest impact on how they deliver.

No two teams will see the same patterns, and that’s the point. Use the guidance below not as a measurement of worth but as a compass to help your team navigate toward better outcomes together.

10-Dimension Agile Team Performance Assessment Framework

These dimensions serve as valuable tools for self-assessments, retrospectives, or leadership reviews, offering a framework to evaluate not just what teams deliver, but how effectively they perform.

  1. Execution & Ownership: Do we plan realistically, adapt when needed, and take shared responsibility for outcomes?
  2. Collaboration & Communication: Do we collaborate openly, communicate effectively, and stay aligned across roles?
  3. Flow & Efficiency: Is our work moving steadily through the system with minimal delays or waste?
  4. Code Quality & Engineering Practices: Do we apply consistent technical practices that support high-quality, sustainable code?
  5. Operational Readiness & Observability: Are we ready to monitor, support, and improve the solutions we deliver?
  6. Customer & Outcome Focus: Do we understand who we’re building for and how our work delivers real-world value?
  7. Role Clarity & Decision Making: Are roles well understood, and do we share decisions appropriately across the team?
  8. Capabilities & Growth: Do we have the skills to succeed, and are we growing individually and as a team?
  9. Data-Driven Improvement: Do we use metrics, retrospectives, and feedback to improve how we work?
  10. Business-Technical Integration: Do we balance delivery of business and customer value with investment in technical health?

These dimensions help teams focus not just on what they’re delivering but also on how their work contributes to long-term success.

Reflection Table

This sample table is a great way to start conversations. It works well for retrospectives, quarterly check-ins, or when something feels off. Each category includes a key question and signs that may indicate your team is facing challenges in that area. These can be used as a team survey as well.

Execution & Ownership
Reflection Prompts: Do we plan realistically and follow through on what we commit to? Are we updating estimates and plans as new information emerges? Do we raise blockers or risks early? Are we collectively responsible for outcomes?
Signs of Struggle: Missed or overly optimistic goals, reactive work, unclear priorities or progress, estimates that are outdated or disconnected from reality, blaming others or avoiding accountability when things go wrong.

Collaboration & Communication
Reflection Prompts: Do we communicate openly, show up for team events, and work well across roles? How do we share knowledge and maintain alignment?
Signs of Struggle: Silos, missed handoffs, unclear ownership, frequent miscommunication.

Flow & Efficiency
Reflection Prompts: How efficiently does work move through our system? Are we managing context switching, controlling work in progress, and minimizing delays or rework?
Signs of Struggle: Ignored bottlenecks, context switching, stale or stuck work.

Code Quality & Engineering Practices
Reflection Prompts: Do we value quality in every commit? Are testing, automation, and clean code part of our culture? Do we apply consistent practices to ensure high-quality, maintainable code?
Signs of Struggle: Bugs, manual processes, high rework, tech debt increasing.

Operational Readiness & Observability
Reflection Prompts: Can we detect, troubleshoot, and respond to issues quickly and confidently?
Signs of Struggle: No monitoring, poor alerting, users report issues before we know.

Customer & Outcome Focus
Reflection Prompts: Do we understand the “why” behind our work (the anticipated outcome)? Do we measure whether we’re delivering impact and not just features?
Signs of Struggle: Misaligned features, lack of outcome tracking, limited feedback loops.

Role Clarity & Decision Making
Reflection Prompts: Are team roles clear to everyone on the team? Do we share decision-making across product, tech, and delivery?
Signs of Struggle: Conflicting priorities, top-down decision dominance, slow resolution.

Capabilities & Growth
Reflection Prompts: Do we have the right skills to succeed and time to improve them? Do we have the capabilities required to deliver work?
Signs of Struggle: Skill gaps, training needs ignored, dependence on specialists or other teams.

Data-Driven Improvement
Reflection Prompts: Do we use metrics, retrospectives, and feedback to improve how we work?
Signs of Struggle: Metrics ignored, retros lack follow-through, repetitive problems.

Accountability & Ownership
Reflection Prompts: Can we be counted on? Do we take shared responsibility for our delivery and raise risks early?
Signs of Struggle: Missed deadlines, hidden blockers, avoidance of tough conversations.

Business-Technical Integration
Reflection Prompts: Are we balancing product delivery with long-term technical health and business needs?
Signs of Struggle: Short-term thinking, ignored tech debt, disconnected roadmap and architecture.

How this appears in table format:

| Dimension | Reflection Prompts | Signs of Struggle |
|---|---|---|
| 1. Execution & Ownership | Do we plan realistically and follow through on what we commit to? Are we updating estimates and plans as new information emerges? Do we raise blockers or risks early? Are we collectively responsible for outcomes? | Missed or overly optimistic goals, reactive work, unclear priorities or progress, estimates outdated or disconnected from reality, blaming others or avoiding accountability. |
| 2. Collaboration & Communication | Do we communicate openly, show up for team events, and work well across roles? How do we share knowledge and maintain alignment? | Silos, missed handoffs, unclear ownership, frequent miscommunication. |
| 3. Flow & Efficiency | How efficiently does work move through our system? Are we managing context switching, controlling work in progress, and minimizing delays or rework? | Ignored bottlenecks, context switching, stale or stuck work. |
| 4. Code Quality & Engineering Practices | Do we value quality in every commit? Are testing, automation, and clean code part of our culture? Do we apply consistent practices to ensure high-quality, maintainable code? | Bugs, manual processes, high rework, increasing tech debt. |
| 5. Operational Readiness & Observability | Can we detect, troubleshoot, and respond to issues quickly and confidently? | No monitoring, poor alerting, users report issues before we know. |
| 6. Customer & Outcome Focus | Do we understand the “why” behind our work (the anticipated outcome)? Do we measure whether we’re delivering impact and not just features? | Misaligned features, lack of outcome tracking, limited feedback loops. |
| 7. Role Clarity & Decision Making | Are roles clear? Do we share decision-making across product, tech, and delivery? | Conflicting priorities, top-down decision dominance, slow resolution. |
| 8. Capabilities & Growth | Do we have the right skills to succeed and time to improve them? Do we have the capabilities required to deliver work? | Skill gaps, training needs ignored, dependence on specialists or other teams. |
| 9. Data-Driven Improvement | Do we use metrics, retrospectives, and feedback to improve how we work? | Metrics ignored, retros lack follow-through, repetitive problems. |
| 10. Business-Technical Integration | Are we balancing product delivery with long-term technical health and business needs? | Short-term thinking, ignored tech debt, disconnected roadmap and architecture. |

Detailed Assessment Reference

For teams looking for assessment levels, the next section breaks down each reflection category. It explains what “Not Meeting Expectations,” “Meeting Expectations,” and “Exceeding Expectations” look like in practice.

Execution & Ownership
Do we plan realistically, adapt when needed, and take shared responsibility for outcomes?

  • Not Meeting Expectations:
    No planning rhythm; commitments are missed; estimates are rarely updated; blockers are hidden.
  • Meeting Expectations:
    Team plans regularly, meets most commitments, revises estimates as needed, and raises blockers transparently.
  • Exceeding Expectations:
    Plans adapt with agility; estimates are realistic and actively managed; the team owns outcomes and proactively addresses risks.

Collaboration & Communication
Do we collaborate openly, communicate effectively, and stay aligned across roles?

  • Not Meeting Expectations: Works in silos; communication is inconsistent or unclear; knowledge isn’t shared. Team members are not attending meetings or conversations regularly.
  • Meeting Expectations: Team collaborates effectively and communicates openly across roles.
  • Exceeding Expectations: Team creates shared clarity, collaborates regularly, and actively drives alignment across all functions.

Flow & Efficiency
Is our work moving steadily through the system with minimal delays or waste?

  • Not Meeting Expectations: Work is consistently blocked or stuck; high WIP and frequent context switching slow delivery.
  • Meeting Expectations: Team manages WIP, removes blockers, and maintains steady delivery flow.
  • Exceeding Expectations: Team actively optimizes flow end-to-end; bottlenecks are identified and resolved.

Code Quality & Engineering Practices
Do we apply consistent technical practices that support high-quality, sustainable code?

  • Not Meeting Expectations: Defects are frequent; automation, testing, and refactoring are lacking.
  • Meeting Expectations: Defects are infrequent; code reviews and testing are standard; quality practices are regularly applied.
  • Exceeding Expectations: Quality is a shared team value; clean code, automation, and sustainable practices are embedded.

Operational Readiness & Observability
Are we ready to monitor, support, and improve the solutions we deliver?

  • Not Meeting Expectations: Monitoring is missing or insufficient; issues are discovered by users.
  • Meeting Expectations: Alerts and monitoring are in place; team learns from post-incident reviews.
  • Exceeding Expectations: Observability is proactive; issues are detected early and inform ongoing improvements.

Customer & Outcome Focus
Do we understand who we’re building for and how our work delivers real-world value?

  • Not Meeting Expectations: Work is disconnected from business goals; outcomes are not communicated or measured.
  • Meeting Expectations: Team understands customer or business impact and loosely ties delivery to anticipated outcomes and value.
  • Exceeding Expectations: Business or Customer impact drives planning and iteration; outcomes are tracked and acted upon.

Role Clarity & Decision Making
Are roles well understood, and do we share decisions appropriately across the team?

  • Not Meeting Expectations: Decision-making and prioritization are top-down or unclear; roles are overlapping or siloed.
  • Meeting Expectations: Team members understand their roles, prioritize, and make decisions collaboratively.
  • Exceeding Expectations: Teams co-own prioritization and decisions with transparency, clear tradeoffs, and joint accountability.

Capabilities & Growth
Do we have the skills to succeed, and are we growing individually and as a team?

  • Not Meeting Expectations: Skill gaps persist; team lacks growth opportunities or training support.
  • Meeting Expectations: The team has the right skills for current work and seeks help when needed.
  • Exceeding Expectations: Team proactively builds new capabilities, shares knowledge, and adapts to new challenges.

Data-Driven Improvement
Do we use metrics, retrospectives, and feedback to improve how we work?

  • Not Meeting Expectations: Feedback is anecdotal; metrics are not understood, are ignored, or go unused in retrospectives.
  • Meeting Expectations: Team uses metrics and feedback to inform improvements regularly.
  • Exceeding Expectations: Metrics drive learning, experimentation, and meaningful change.

Business-Technical Integration
Do we balance delivery of business and customer value with investment in technical health?

  • Not Meeting Expectations: Technical health is ignored or sidelined in favor of speed and features.
  • Meeting Expectations: Product and engineering collaborate on both business value and technical needs.
  • Exceeding Expectations: Long-term technical health and business alignment are integrated into delivery decisions.

How this appears in table format:

10-Dimension Agile Team Performance Assessment Framework (3-Point Scale)

| Dimension | Not Meeting Expectations | Meeting Expectations | Exceeding Expectations |
|---|---|---|---|
| 1. Execution & Ownership | No planning rhythm; missed commitments; outdated estimates; blockers hidden. | Regular planning; estimates revised; blockers raised transparently. | Plans adapt with agility; estimates are managed; team owns outcomes and addresses risks proactively. |
| 2. Collaboration & Communication | Siloed work; unclear communication; knowledge hoarded. | Open, cross-role communication; knowledge shared. | Team drives shared clarity and proactive alignment with others. |
| 3. Flow & Efficiency | Work stalls; high WIP; frequent context switching. | Steady flow; WIP managed; blockers removed. | Flow optimized across the system; bottlenecks surfaced and resolved quickly. |
| 4. Code Quality & Engineering | Frequent defects; minimal automation; unmanaged tech debt. | Testing and reviews in place; debt tracked. | Clean, sustainable code is a team norm; quality and automation prioritized. |
| 5. Operational Readiness | Monitoring lacking; users detect issues. | Monitoring and alerting in place; incident reviews occur. | Team detects issues early; observability drives proactive improvement. |
| 6. Customer & Outcome Focus | Little connection to business value or user needs. | Team aware of goals; some outcome alignment. | Delivery prioritized around customer value; outcomes measured and iterated on. |
| 7. Role Clarity & Decision Making | Roles unclear; top-down decisions. | Roles defined; collaborative decision-making. | Shared decision ownership; tradeoffs transparent and understood. |
| 8. Capabilities & Growth | Skill gaps ignored; no focus on development. | Right skills in place; asks for help when needed. | Team proactively grows skills; cross-training and adaptability are norms. |
| 9. Data-Driven Improvement | Metrics ignored; retros repetitive or shallow. | Data and feedback used in team improvement. | Metrics and feedback drive learning and meaningful change. |
| 10. Business-Technical Integration | Technical health neglected; short-term focused. | Business and tech needs discussed and planned. | Business outcomes and technical resilience are co-prioritized in delivery. |

The assessment is meant to start conversations. Use it as a guide, not a strict scoring system, and revisit it as your team grows and changes. High-performing teams regularly reflect as part of their routine, not just occasionally.

How to Use This and Who Should Be Involved

This framework isn’t primarily a performance review. It’s a reflection tool designed for teams to assess themselves, clarify their goals, and identify areas for growth.

Here’s how to make it work:

1. Run It as a Team

Use this framework during retrospectives, quarterly check-ins, or after a major delivery milestone. Let the team lead the conversation. They’re closest to the work and best equipped to evaluate how things feel.

The goal isn’t to assign grades. It’s to pause, align, and ask: How are we doing?

2. Make It Yours

There’s no need to use all ten dimensions. Start with the ones that resonate most. You can rename them, add new ones, or redefine what “exceeding expectations” looks like in your context.

The more it reflects your team’s values and language, the more powerful the reflection becomes.

3. Use Metrics to Support the Story, Not Replace It

Delivery data like DORA, Flow Metrics, or Developer Experience scores can add perspective. But they should inform, not replace the conversation. Numbers are helpful, but they don’t speak for how it feels to deliver work together. Let data enrich the dialogue, not dictate it.

4. Invite Broader Perspectives

Some teams can gather anonymous 360° feedback from stakeholders or adjacent teams, surfacing blind spots and validating internal perceptions.

Agile Coaches or Delivery Leads can also bring an outside-in view, helping the team see patterns over time, connecting the dots across metrics and behaviors, and guiding deeper reflection. Their role isn’t to evaluate but to support growth.

5. Let the Team Decide Where They Stand

As part of the assessment, ask the team:
Would we consider ourselves underperforming, performing, or high-performing?

Then explore:

  • What makes us feel that way?
  • What should we keep doing?
  • What should we stop doing?
  • What should we start doing?

These questions give the framework meaning. It turns observation into insight and insight into action.

This Is About Ownership, Not Oversight

This reflection guide and its 10 dimensions can serve as a performance management tool, but I strongly recommend using it as a check-in resource for teams. It’s designed to build trust, encourage honest conversations, and offer a clear snapshot of the team’s current state. When used intentionally, it enhances team cohesion and strengthens overall capability. For leaders, focusing on recurring themes rather than individual scores reveals valuable patterns that can inform coaching efforts rather than impose control. Adopting it is in your hands and your team’s.

Final Thoughts

This all started with a conversation and a question: “We do performance reviews for individuals, but what about teams?” If we care about how individuals perform, shouldn’t we also care about how teams perform together?

High-performing teams don’t happen by accident. They succeed by focusing on both what they deliver and how they deliver it.

High-performing teams don’t just meet deadlines; they adapt, assess themselves, and improve together. This framework provides them with a starting point to make that happen.

I’ll create a Google Form with these dimensions, using a 3-point Likert scale for our teams to fill out.
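For teams that prefer to tally such a survey programmatically, here is a minimal sketch. The exported response format, dimension names, and scale labels are assumptions for illustration, not the actual form.

```python
from collections import Counter

# Hypothetical survey export: one dict per respondent, mapping each
# dimension to a rating on the 3-point scale.
SCALE = ["Not Meeting", "Meeting", "Exceeding"]
responses = [
    {"Execution & Ownership": "Meeting",   "Flow & Efficiency": "Not Meeting"},
    {"Execution & Ownership": "Exceeding", "Flow & Efficiency": "Not Meeting"},
    {"Execution & Ownership": "Meeting",   "Flow & Efficiency": "Meeting"},
]

tallies = {}
for row in responses:
    for dimension, rating in row.items():
        tallies.setdefault(dimension, Counter())[rating] += 1

# Surface each dimension's spread so the retro discussion starts from the
# distribution of opinions, not a single averaged score.
for dimension, counts in tallies.items():
    spread = ", ".join(f"{label}: {counts.get(label, 0)}" for label in SCALE)
    print(f"{dimension} -> {spread}")
```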


Related Articles

If you found this helpful, here are a few related articles that explore the thinking behind this framework:

  • From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics
  • Navigating the Digital Product Workflow Metrics Landscape: From DORA to Comprehensive Value Stream Management Platform Solutions

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

From Scrum Master to Agile Delivery Manager: Evolution in the Age of Flow

April 14, 2025 by philc

6 min read

This post was inspired by a LinkedIn post shared by Dave Westgarth.

In 2025, we formally changed the title of Scrum Master to Agile Delivery Manager (ADM) in our technology division. This renaming wasn’t a rebrand for the sake of optics. It reflected a deeper evolution already happening, rooted in the expanding scope of delivery leadership, the adoption of Flow Metrics and Value Stream Management, and our real-world shift from strict Scrum toward a more customized Kanban-based model.

It was this year that the name finally clicked. After assigning Value Stream Architect responsibilities to our Scrum Masters and giving them ownership of delivery metrics, team-level delivery health, and collaboration across roles within their Agile team, I realized the title “Scrum Master” no longer fit their role. I even considered Agile Value Stream Manager, but it felt too narrow and platform-specific.

That’s when Agile Delivery Manager stood out, not only as a better label but also as a more accurate reflection of the mindset and mission.

I’m not alone in this. My wife, a Scrum Master, noticed a rise in Agile Delivery Manager roles. These roles are emerging as a natural evolution of the Scrum Master role, broader in scope but still grounded in servant leadership and Agile values. This shift is becoming more common across industries.

Why We Made the Change

This wasn’t an overnight decision—it was the culmination of years of observing the gap between traditional agile roles and modern delivery demands. I’ve written extensively about the evolving nature of delivery roles in the modern product and engineering ecosystem. In “Navigating the Digital Product Workflow Metrics Landscape,” I highlighted how organizations that have matured beyond Agile 101 practices shift their attention upstream toward value creation, flow efficiency, and business impact.

In that article, I shared:

“Organizations that have invested in high automation, eliminated waste, and accelerated CI/CD cycles are now shifting left—seeking broader visibility from idea to operation.”
– Navigating the Digital Product Workflow Metrics Landscape

Similarly, in “Dependencies Are Here to Stay,” I discussed why frameworks couldn’t box delivery leadership in:

“We can’t measure agility in isolation. Dependencies are part of the system, not a failure of it. Leadership roles must evolve to manage flow across those dependencies, not just within a team board.”

This evolution is what our former Scrum Masters were doing. They were coaching teams and guiding delivery conversations, navigating delivery risks, managing stakeholder expectations, and tracking systemic flow. The title needed to grow with the responsibility.

The Agile Role That Connects It All

Agile leadership roles and responsibilities vary across organizations. Some have Scrum Masters or Agile Leaders, while others use titles like Technical Project Manager or Agile Coach. In some cases, responsibilities shift to Engineering or Product Managers, and some companies distribute these duties among team members and eliminate the role entirely. Despite these differences, we believe a dedicated Agile leadership position is valuable. This role plays a key part in improving team performance and delivery efficiency and in optimizing workflows.


The Agile Delivery Manager role is unique in that it is the only role on the team not incentivized by a specific type of work.

  • Product Managers focus on growth and prioritize new features.
  • Technical Leads concentrate on architecture and managing technical debt.
  • Information Security leaders work to reduce security risks.
  • QA teams ensure defects are identified and fixed.

The Agile Delivery Manager operates at a higher level, overseeing workflow across the distribution of work types, including features, technical debt, risks, and defects. The role fosters continuous team improvement while ensuring that deliveries consistently drive tangible business value.

Inside the Agile Delivery Manager Role

It’s worth clarifying: In our model, Agile Delivery Managers remain focused on their assigned Agile team or teams. While the title may sound broader, the role is not intended to operate across multiple delivery teams or coordinate program-level work. Instead, ADMs guide and improve the delivery flow within their own team context—coaching the team, optimizing its workflow, and partnering with product and engineering to ensure value is delivered efficiently.

Here’s how we now define the Agile Delivery Manager in our updated job description:

“As an Agile Delivery Manager, you’ll lead strategic transformation, champion Flow Metrics and VSM, and shape how teams deliver real business value.”

Key responsibilities include:

  • Agile Leadership & Flow-Based Delivery
    Coaching teams while enabling clarity, cadence, and sustainability in customized Kanban-style systems.
  • Team Collaboration & Dependency Management
    Collaborating with Product, QA, InfoSec, and Engineering roles within the team to resolve blockers, ensure quality, and maintain delivery flow.
  • Flow Metrics & Value Stream Optimization
    Leading metric reviews using Flow Time, Load, Efficiency, and Distribution to drive better delivery outcomes (see the sketch after this list).
  • Value Stream Architecture
    Acting as system-level delivery architects, not of code, but of how work flows from concept to value.
  • Strategic Reporting & Outcome Alignment
    Building quarterly delivery reports that tie execution to business value, supporting leadership visibility and continuous improvement.
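To ground two of the metrics named above, here is an illustrative Python sketch. The work-item records, work types, and field names are assumptions, not a specific tool’s schema; Flow Efficiency is computed as active time over total elapsed flow time, and Flow Distribution as the mix of completed work by type.

```python
# Hypothetical completed work items with elapsed and active durations.
items = [
    {"type": "feature", "flow_time_days": 10, "active_days": 4},
    {"type": "defect",  "flow_time_days": 3,  "active_days": 2},
    {"type": "debt",    "flow_time_days": 8,  "active_days": 3},
    {"type": "feature", "flow_time_days": 12, "active_days": 5},
]

# Flow Efficiency: share of elapsed time spent actively working.
efficiency = (sum(i["active_days"] for i in items)
              / sum(i["flow_time_days"] for i in items))

# Flow Distribution: mix of completed work across work types.
distribution = {}
for i in items:
    distribution[i["type"]] = distribution.get(i["type"], 0) + 1 / len(items)

print(f"Flow Efficiency: {efficiency:.0%}")  # 42%
print({work_type: f"{share:.0%}" for work_type, share in distribution.items()})
```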

This role no longer fits the narrow scope that Scrum once offered. It combines delivery leadership, agile stewardship, and flow optimization.

What This Means for Scrum Masters

If you’re a Scrum Master wondering what’s next, you’re not alone. You’re likely doing much of this work already, but this role demands time to widen the lens.

As Dave Westgarth shared on LinkedIn:

“You’re using the same core competencies: facilitation, servant leadership, coaching, and team empowerment. They just get applied at different levels and from different perspectives.”

This evolution isn’t about abandoning Agile. It’s about scaling its intent.

Many of our ADM team members still value their strong Scrum foundation. However, they’ve broadened their focus to improve delivery efficiency, enhance team coordination, manage delivery risks, and ensure smooth team workflows across competing work types and stakeholder needs.

If you’re already guiding delivery beyond team ceremonies, influencing system flow, and navigating complexity, this evolution is your next chapter.

Final Thoughts

The shift to an Agile Delivery Manager reflects a modern reality: frameworks alone don’t scale agility; people do. The ADM role honors the coaching mindset of the Scrum Master while embracing the delivery complexities of today’s hybrid, platform-heavy, and outcome-driven organizations.

For our division, the name change signaled to our teams and business stakeholders that delivery leadership had evolved. More importantly, it gave our people permission to grow into that evolution.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable

April 12, 2025 by philc

9 min read

Connect the dots: Show how engineering efforts drive business impact by linking their work to key organization metrics and outcomes. Highlight their value and contribution to overall success.

When Sarcasm Reveals Misalignment

Last week, one of my Agile Delivery Leaders brought forward a concern from her team that spoke volumes, not just about an individual, but about the kind of tension that quietly persists in even the most mature organizations.

She asked her team to define the anticipated outcome for a new Jira Epic, a practice I’ve asked all teams to adopt to ensure software investments align with business goals. However, the team struggled to identify the anticipated outcome. On top of that, a senior team member who’s been part of our transformation for years dismissed the idea instead of contributing to the discussion, leaving her in a difficult position, torn between the senior member’s authority and her own responsibilities. He commented something like:

“Why are we doing this? This is stupid. Phil read another book, and suddenly we’re all expected to jump on board.”

When I first heard that comment secondhand, I felt a wave of anger; it struck me as pure arrogance. This leader chose not to share his perspective with me directly, perhaps for reasons he deemed valid. But as I thought more about it, I realized it wasn’t arrogance at all, but ignorance. Not malicious ignorance, but the kind that comes from discomfort, uncertainty, or an unwillingness to admit they no longer understand or align with where things are going. Comments like that are often defense mechanisms. They mask deeper resistance, reveal a lack of clarity, or quietly question whether someone still fits into a system evolving beyond their comfort zone.

This wasn’t about rejecting change or progress; it was about pushing back against how we’re evolving. Moments like this remind us that true transformation isn’t just about forging ahead; it’s about fostering belief and alignment in mindset and actions as we move forward.

Purpose-Driven Development: My Approach to Sustainable Alignment

I asked teams to define anticipated outcomes not to add overhead but to protect the integrity of the way we build software.

Over the past decade, I’ve worked hard to lead our engineering teams and organization out of the “feature factory” trap, where the focus is on output volume, velocity, and shipping for the sake of shipping. Through that experience, I developed what I call Purpose-Driven Development (PDD).

Purpose-driven development might sound like a buzzword, but it’s how we bring Agile and Lean principles to life. It ensures delivery teams aren’t just writing code; they’re solving the right problems for the right reasons with clear goals and intentions.

PDD is built on one core idea: every initiative, epic, and sprint should be based on a clear understanding of why it matters.

Anticipated Outcomes: A Small Practice That Changes Everything

To embed this philosophy into our day-to-day work, we introduced a simple yet powerful practice:

Every Epic or Initiative must include an “Anticipated Outcome.”

Just a sentence or two that answers:

  • What are we hoping to achieve by doing this work?
  • How will it impact the customer, the business, or the platform?

We don’t expect perfection. We expect intention. The goal isn’t to guarantee results but to anchor the work in a hypothesis that can be revisited, challenged, or learned from.

This simple shift creates:

  • Greater alignment between teams and strategy
  • More meaningful prioritization
  • Opportunities to reflect on outcomes, not just outputs
  • Visibility across leadership into what we’re investing in

Who Might Push Back and Why That’s Okay

When we ask teams to define anticipated outcomes, it’s not about creating friction; it’s about creating focus. And this shouldn’t feel like a burden to most of the team.

I believe engineers will welcome it. Whether they realize it at first or not, this clarity gives them purpose. It ties their daily work to something that matters beyond code.

The only two roles I truly expect might feel frustration when asked to define anticipated outcomes are:

Product Managers and Technical Leaders.

And even that frustration? It’s understandable.

Product Managers often experience pain from not being involved early enough in the ideation or problem-definition stage. They may not know the anticipated outcome if they’re handed priorities from a higher-level product team without the context or autonomy to shape the solution. And that’s the problem, not the question itself, but the absence of trust and inclusion upstream.

For Technical Leaders, the frustration often surfaces when advocating for tech debt work. They know the system needs investment but struggle to translate that into a clear business reason. I get it; it’s frustrating when you know the consequences of letting entropy creep in, but you haven’t been taught to describe that impact in terms of business value, customer experience, or system performance.

But that’s exactly why this practice matters.

Asking for an anticipated outcome isn’t a punishment. It’s an exercise in alignment and clarity. And if that exercise surfaces frustration, that’s not failure. It’s the first step toward better decision-making and stronger cross-functional trust.

Whether it’s advocating for feature delivery or tech sustainability, we can’t afford to work in a vacuum. Every initiative, whether shiny and new or buried in system debt, must have a reason and a result we’re aiming for.

Anticipated Outcomes First, But OKR Alignment Is the Future

When I introduced the practice of documenting anticipated outcomes in every Epic or Initiative, I also asked for something more ambitious: a new field in our templates to capture the parent OKR or Key Result driving the work.

The goal was simple but powerful:

If we claim to be an outcome-driven organization, we should know what outcome we’re aiming for and where it fits in our broader strategy.

I aimed to help teams recognize that their Initiatives or Epics could serve as team-level Key Results directly tied to overarching business objectives. After all, this work doesn’t appear by chance. It’s being prioritized by Product, Operations, or the broader business for a deliberate purpose: to drive progress and advance the company’s goals.

But when I brought this to our Agile leadership group, the response was clear: this was too much to push simultaneously.

Some teams didn’t know the parent KR, and some initiatives weren’t tied to a clearly articulated OKR. Our organizational OKR structure was often incomplete, and we were missing the connective tissue between top-level objectives and team-level execution.

And they were right.

We’re still maturing in how we connect strategy to delivery. For many teams, asking for the anticipated outcome and the parent OKR at once felt like a burden, not a bridge.

So, we paused the push for now. My focus remains first on helping teams articulate the anticipated outcome. That alone is a leap forward. As we strengthen that muscle, I’ll help connect the dots upward, mapping team efforts to the business outcomes they drive, even if we don’t have the complete OKR infrastructure yet.

Alignment starts with clarity. And right now, clarity begins with purpose.

Without an anticipated outcome, every initiative is a dart thrown in the dark.

It might land somewhere useful or waste weeks of productivity on something that doesn’t matter.

Documenting the outcome gives us clarity and direction. It means we’re making strategic moves, not random ones. And it reduces the risk of high-output teams being incredibly productive… at the wrong thing.

Introducing the Feature Factory Ratio

To strengthen our focus on PDD and prioritize outcomes over outputs, we are introducing a new core insights metric as part of our internal diagnostics:

Feature Factory Ratio (FFR) =

(Number of Initiatives or Epics without Anticipated Outcomes / Total Number of Initiatives or Epics) × 100

The higher the ratio, the greater the risk of operating like a feature factory, moving fast but potentially delivering little that matters.

The lower the ratio, the more confident we can be that our teams are connecting their work to value.

This ratio isn’t about micromanagement; it’s about organizational awareness. It tells us where alignment is breaking down and where we may need to revisit how we communicate the “why” behind our work.
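As a minimal sketch of the calculation, here is the ratio computed over a simple list of epic records. The keys and the `anticipated_outcome` field are illustrative only, not an actual Jira schema.

```python
# Hypothetical epic records; keys and field names are illustrative only.
epics = [
    {"key": "PAY-101", "anticipated_outcome": "Cut checkout failures by 20%"},
    {"key": "PAY-102", "anticipated_outcome": ""},
    {"key": "PAY-103", "anticipated_outcome": None},
    {"key": "PAY-104", "anticipated_outcome": "Reduce refund support tickets"},
]

# An epic counts as missing its outcome if the field is absent, empty,
# or only whitespace.
missing = sum(1 for e in epics if not (e.get("anticipated_outcome") or "").strip())
ffr = missing / len(epics) * 100

print(f"Feature Factory Ratio: {ffr:.0f}%")  # 2 of 4 epics lack an outcome -> 50%
```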

Why We Call It the Feature Factory Ratio

When I introduced this metric, I considered several other names:

  • Outcome Alignment Ratio – Clear and descriptive, but lacking urgency
  • Clarity of Purpose Index – Insightful, but a bit abstract
  • Value Connection Metric – Emphasizes intent, but sounds like another analytics KPI

Each option framed the idea well, but they didn’t hit the nerve I wanted to expose.

Ultimately, I chose the Feature Factory Ratio because it speaks directly to the cultural pattern we’re trying to break.

It’s provocative by design. It challenges teams and leaders to ask, “Are we doing valuable work or just shipping features?” It turns an abstract concept into a visible metric and surfaces conversations we must have when our delivery drifts from our strategy.

Sometimes, naming things with impact helps us lead the behavior change that softer language can’t.

Sidebar: Superficial Alignment, The Silent Threat

One of the biggest leadership challenges in digital transformation isn’t open resistance; it’s superficial alignment.

Superficially aligned senior leaders attend the workshops, adopt the lingo, and show up to the town halls, but when asked to change how they work or lead, they bristle. They revert. They roll their eyes or make sarcastic comments.

What they’re really saying is: I’m not sure I believe in this, or I don’t know how I fit anymore.

The danger is that superficial alignment looks like progress while blocking true transformation. It creates cultural drag. It confuses teams and weakens momentum.

Moments like the one I shared remind me that transformation isn’t a checkbox but a leadership posture. And sometimes, those sarcastic comments? They’re your clearest sign of where real work still needs to happen.

Start Where You Are and Grow from There

We’re all at different points in our transformation journeys as individuals, teams, and organizations.

So, instead of reacting with frustration when someone can’t articulate an outcome or when a snide remark surfaces resistance, use it as a signal.

Meet your team where they are. Use every gap as a learning opportunity, not a leadership failure.

If a team can’t answer “What’s the anticipated outcome?” today, help them start asking it anyway. The point isn’t to have every answer right now. It’s to build the muscle so that someday, we will.

These questions aren’t meant to judge where we are. They’re meant to guide us toward where we’re trying to go. This is the work of modern software leadership.

It’s easy to say we want to be outcome-driven. Embedding that belief into daily practice is harder, especially when senior voices or legacy habits push back.

But this is the work:

  • Aligning delivery to strategy
  • Teaching teams to think in terms of impact
  • Holding the line on purpose—even when it’s uncomfortable
  • Measuring not just what we ship but why we’re shipping it

Yes, I’ve read my fair share of books. Along the way, key moments and outcomes have shaped my journey in adopting new initiatives within our division and organization, such as Value Stream Management, and in understanding what it means to deliver real value. I’ve led teams through transformation and seen what works. From my experience in our organization and from working with other industry leaders, I’ve learned that software delivery with a clear purpose is more effective, empowering, and valuable for the business, our customers, and the teams doing the work.


Leader’s Checklist: Outcome Alignment in Agile Teams

Use this checklist to guide your teams and yourself toward delivering work that matters.

1. Intent Before Execution

  • Is every Epic or Initiative anchored with a clear Anticipated Outcome?
  • Have we stated why this work matters to the customer, business, or platform?
  • Are we avoiding the trap of “just delivering features” without a defined end state?

2. Strategic Connection

  • Can this work be informally or explicitly tied to a higher-level Key Result, business goal, or product metric?
  • Are we comfortable asking, “What is the business driver behind this work?” even if it’s not written down yet?

3. Team-Level Awareness

  • Do developers, QA, and designers understand the purpose behind what they’re building?
  • Can the team articulate what success looks like beyond “we delivered it”?

4. Product Owner Empowerment

  • Has the Product Manager or Product Owner been involved in problem framing, or were they handed a solution from above?
  • If they seem disconnected from the outcome, is that a signal of upstream misalignment?

5. Tech Debt with Purpose

  • If the work is tech debt, have we articulated its impact on system reliability, scalability, or risk?
  • Can we tie this work back to customer experience, transaction volume, or long-term business performance?

6. Measurement & Reflection

  • Are we tracking how many Initiatives or Epics lack anticipated outcomes using the Feature Factory Ratio? (A minimal sketch of this calculation follows the checklist.)
  • Do we ever reflect on anticipated vs. actual outcomes once work is delivered?

7. Cultural Leadership

  • Are we reinforcing that asking, “What’s the anticipated outcome?” is about focus, not control?
  • When we face resistance or discomfort, are we leading with curiosity instead of compliance?
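
To make the Feature Factory Ratio concrete, here is a minimal sketch of the calculation. It assumes the ratio is simply the share of Initiatives or Epics with no stated anticipated outcome; the function and field names are hypothetical, not from any particular tool.

    def feature_factory_ratio(epics):
        """Share of Initiatives/Epics with no stated anticipated outcome.

        `epics` is any iterable of objects with an `anticipated_outcome`
        attribute, where None or an empty string means no outcome was
        written down.
        """
        epics = list(epics)
        if not epics:
            return 0.0
        missing = sum(1 for e in epics
                      if not (e.anticipated_outcome or "").strip())
        return missing / len(epics)

A ratio trending toward zero suggests work is being anchored in outcomes; a high or rising ratio is the feature-factory signal worth raising at the next review.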

Remember:

Clarity is a leadership responsibility.

If your teams don’t know why they’re doing the work, the real problem is upstream, not them.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Flow Retreat 2025: Practicing the Work Behind the Work

March 29, 2025 by philc

4 min read

The Flow Leadership Retreat was the vision of Steve Pereira, co-author of the recently released book Flow Engineering: From Value Stream Mapping to Effective Action, and Kristen Haennel, his partner in building communities rooted in learning, collaboration, and systems thinking. But this wasn’t a typical professional gathering. Rather than a conference packed with sessions and slides, they created an immersive experience designed to bring together professionals from diverse industries to step back, reflect, and practice what it truly means to improve the flow of work.

The setting, against the remote and stunning oceanfront of the Yucatán Peninsula, wasn’t just beautiful; it was intentional. Free from the usual distractions, it created space for focused thinking, deeper conversations, and clarity that rarely emerges in day-to-day operations.

When I joined this first-ever Flow Leadership Retreat in March 2025, I expected thoughtful discussions on delivery systems, value streams, and flow. What I didn’t expect was how much the environment, the people, and the open space to think differently would shift my entire perspective on how work works.

As someone who’s spent the last 4 years advocating for Value Stream Management (VSM) and building systems that improve visibility and flow, I came into the retreat hoping to sharpen those tools. I left with refined perspectives and a renewed appreciation for the power of stepping away from execution to examine the system itself.

Flow Before Framework

On Day 1, we didn’t jump straight into diagrams or frameworks. Instead, we challenged ourselves to define what flow really means, individually and collectively. Some participants reached for physics and nature metaphors; others spoke about momentum, energy, or alignment.

And that was the point.

We explored flow not just as a metric but as a state of system performance and psychological readiness, and sometimes as something blocked by misalignment between intention and execution.

We examined constraints, those visible and invisible forces that slow work down. We also examined interpersonal and systemic friction as a root cause of waste and a signal for improvement.

The Power of Shared Experience

Day 2 brought stories. Coaches, consultants, and enterprise leaders shared what it’s like to bring flow practices into environments shaped by legacy processes, functional silos, and outdated metrics.

We didn’t just talk about practices. We compared scars. We discussed what happens when flow improvements stall, how leadership inertia manifests, and why psychological safety is essential to sustain improvement.

The value wasn’t in finding a single answer but in hearing how others had wrestled with the same questions from different perspectives. We found resonance in our challenges and, more importantly, in our commitment to change.

Mapping the System: Day 3 and the Five Maps

It wasn’t until Day 3 that we thoroughly walked through the Five Flow Engineering Maps. By then, we had laid the foundation through shared language and intent. The maps weren’t theoretical. They became immediate tools for diagnosing where our systems break down.

Here’s how we practiced:

  • Outcome Mapping helped us clarify what improvement meant and what we were trying to change in the system.
  • Current State Mapping exposed how work flows through the system, where it waits, and why it doesn’t arrive where or when we expect it.
  • Dependency Mapping surfaced the invisible contracts between teams, the blockers that live upstream and downstream of us.
  • Constraint Mapping allowed us to dig deeper into patterns, policies, and structures that prevent meaningful flow.
  • Flow Roadmapping helped us prioritize where to start, what to address next, and how to keep system improvement from becoming another unmeasured initiative.

We didn’t just learn to see the system; we practiced improving it by working through real-world case examples.

An Environment That Made Learning Flow

The villa, tucked away on the Yucatán coast, offered more than scenery. It offered permission to slow down, think, walk away from laptops, and walk into reflection. It gave us the space to surface ideas and hold them up to the breeze as some of our Post-it notes blew away.

That environment became part of the learning. It reminded us that improving flow isn’t just about the process. It’s also about the conditions for thinking, collaborating, and creating clarity.

Final Reflections

This retreat wasn’t about doing more work. It focused on collaboration from different perspectives and experiences, understanding how work flows through our systems, and finding ways to improve it that are sustainable, practical, and measurable.

It reaffirmed something I’ve long believed:

When we fix broken or inefficient systems, we unlock the full potential of our people, our products, and our performance.

I left with more than frameworks. I left with conversations I’ll be thinking about for months, new ways to approach problems I thought I understood, and the clarity that comes only when you step outside the system to study it fully.

I’m grateful for the experience and energized for what’s next.

References

  1. Pereira, S. & Davis, A. (2024). Flow Engineering: From Value Stream Mapping to Effective Action. IT Revolution Press.

Filed Under: Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Why Cutting Agile Leadership Hurts Teams More Than It Saves

March 21, 2025 by philc

8 min read

The Cost-Driven Decision That Many Companies Regret

Many organizations today are eliminating the Scrum Master or equivalent Agile leadership role, not to rebrand it but to cut costs. Instead of keeping Agile leadership as a dedicated role, they distribute its responsibilities across existing team members:

  • Engineering Managers take on Agile execution and delivery oversight.
  • Product Managers absorb backlog management, facilitation, and team coordination.
  • Team members self-manage Agile ceremonies, tracking, and reporting.

At first glance, this is a logical cost-saving move. Mature teams should be able to self-organize. But the reality is far more complicated.

In our company, we’ve seen firsthand that keeping Agile leadership as a distinct role pays off significantly more than the salary it costs.

Why We Didn’t Eliminate This Role

Like many organizations, we’ve gone through multiple Agile transformations:

  • Waterfall to WaterScrumFall – Agile sprints, but still project-driven release cycles
  • Scrum to Kanban & Flow – Shifting toward continuous delivery and flow efficiency
  • Scrum Master to Agile Leader to Agile Delivery Manager – Evolving the role to encompass Flow Metrics, Value Stream Management (VSM), Flow Engineering, and continuous optimization

Rather than eliminate the role, we adapted it to better match how our teams and technology operate today.

We’ve moved away from Scrum and now use a Kanban flow with a few Scrum ceremonies mixed in. As a result, we changed the role name to “Agile Leader,” since neither “Agile Coach” nor “Scrum Master” fit the way we work or the responsibilities of the role.

Meanwhile, our parent company, which structured product teams differently, removed the Scrum Master and QA roles, pushing those responsibilities onto Product and Engineering Managers. This team design isn’t inherently wrong, but it does fundamentally change team dynamics and, in our experience, weakens long-term effectiveness.

Our support for this role deepened when we began adopting Value Stream Management in 2020 and 2021. As we learned more about optimizing the flow of work across the system, aligning delivery to OKRs and business outcomes, and using Flow Metrics to identify bottlenecks, we made a key decision: rather than hire a value stream architect or a separate value delivery lead, we assigned that accountability to the Agile Leader. That move became a turning point. The role now included facilitation, value stream management, flow engineering, and cross-system delivery health. This expansion of responsibility led us to retitle the role as Agile Delivery Manager.

The Hidden Cost of Eliminating Agile Leadership

Many companies assume Agile execution can take care of itself. But what actually happens?

  • Engineering Managers are already stretched managing technical leadership, hiring, mentoring, and architecture. Adding Agile execution oversight creates competing priorities.
  • Product Managers are tasked with strategy, roadmap, and customer insight. When they absorb Agile execution, their ability to drive innovation and product-market fit suffers.
  • Teams default to feature-first work without someone to balance priorities across features, tech debt, security, and defects.
  • The erosion of Agile leadership often leads to a breakdown in psychological safety, team culture, and continuous improvement. Agile leaders aren’t just facilitators but team enablers who cultivate trust, alignment, and growth.

The impact of removing the dedicated Agile leader role isn’t a theoretical concern. We’ve seen organizations eliminate the role and reinstate it later, after delivery slowed, burnout spiked, and alignment broke down.

What Happens When Agile Leadership Is Removed?

When Agile leadership is absorbed rather than owned, teams face:

1. Increased Cognitive Load for Engineering & Product Managers

  • Engineering Managers are expected to facilitate Agile ceremonies, track team health, and optimize delivery on top of leading architecture and engineering excellence.
  • Product Managers now manage the backlog, facilitate delivery, and maintain customer alignment all at once.

2. Reduced Flow Efficiency & Team Alignment

  • Work is optimized for speed over value, with more features and fewer strategic investments in quality, sustainability, or security.
  • No one is clearly accountable for balancing work types across the system.

3. Breakdown in Agile Practices, Psychological Safety & Team Culture

  • Retrospectives lose impact without consistent facilitation.
  • Process improvements stall without clear ownership.
  • Team culture and psychological safety erode, affecting engagement, retention, and long-term execution health.

The Agile Delivery Manager: More Than a Facilitator

As our practices evolved, the role’s title and responsibilities changed as well. In 2025, we’re updating it from Agile Leader to Agile Delivery Manager. The Agile Delivery Manager (ADM) is more than a renamed Scrum Master; it’s an evolved form of Agile leadership designed to ensure:

  • Agile leadership is a focused role, not something to divide among the team  
  • Flow Metrics and Value Stream Management (VSM) help improve overall system delivery  
  • Teams prioritize both feature development and system health, including technical debt, security, and defect remediation
  • Psychological safety, collaboration, and a strong culture are actively maintained

Unlike Product Managers (incentivized to deliver features) or Engineering Managers (focused on technical excellence and delivery), the ADM has no stake in any single type of work. This neutrality is essential. They provide a holistic, unbiased lens on the system, ensuring balanced Flow Distribution and healthy delivery over time. Without this role, teams prioritize visible work and short-term wins, neglecting foundational needs.

What the Experts Say About Scrum Masters and Why It Still Matters

Some well-known Agile voices have described the Scrum Master as a servant-leader, facilitator, and invisible guide:

“Great Scrum Masters don’t manage the team; they enable the team to manage themselves.” – Gunther Verheyen, author of Scrum – A Pocket Guide

“A good Scrum Master is invisible. A great Scrum Master makes the team feel like they did it themselves.” – Geoff Watts, Agile coach and author

“The role of the Scrum Master is not to ensure Scrum is implemented correctly. It’s to ensure that the team continuously improves and delivers value.” – Scrum.org Blog

“Without a dedicated Scrum Master, teams often fall back into old habits, status reporting, command-and-control, and short-term delivery over long-term health.” – Agile coach insight, echoed across retrospectives and forums

These quotes reflect the foundational role the Scrum Master plays in enabling self-managing teams, continuous improvement, and long-term value delivery.

However, the role must evolve as the team matures. When teams move beyond needing constant facilitation, the Agile leader doesn’t become unnecessary; they become more strategic. They step into a broader role: optimizing flow, supporting cross-functional alignment, stewarding system health, and driving outcome-based delivery.

Rather than disappearing, the Agile leader becomes even more critical, not as a passive servant but as a system-level enabler of delivery efficiency and value.

Lessons from Inside: Comparing Team Models

I’m fortunate to work in an organization that supports both models: teams with dedicated Agile Delivery Managers and teams where those responsibilities are assigned to Engineering Managers. This side-by-side comparison has been revealing. Engineering Managers in teams without an ADM often struggle to juggle architectural leadership, people management, Agile ceremonies, psychological safety coaching, and flow metrics. The burden is real, and it dilutes their impact across all fronts. What gets lost is not just ceremony facilitation but sustained attention to team health, value delivery, and process evolution. Without a clear guide focused on system optimization, these teams tend to operate reactively.

That said, I also recognize that some long-lived, high-performing teams have matured to the point where they can self-manage without formal Agile leadership. These teams have developed strong cultures, embedded trust, and deep internal accountability. In those environments, the absence of a dedicated ADM may not be felt day-to-day.

However, this raises an important question: Who is responsible for reporting on delivery health, aligning with outcomes, and guiding continuous optimization across the system? That’s not a critique; it’s just something worth considering.

Different Models, Different Choices

To be clear, I’m not saying one model is right and the other is wrong. I’m sharing what I’ve seen work and where things fall apart.

Different organizations, maturity levels, and team cultures will demand different approaches. But understanding the trade-offs is key. Eliminating Agile leadership may save salary dollars, but it can cost far more in lost alignment, missed improvement opportunities, and team degradation over time.

Key Responsibilities of the Agile Delivery Manager

Agile Leadership & Flow-Based Delivery

  • Facilitate planning, stand-ups, retrospectives, and production reviews
  • Align teams around roles, responsibilities, and work across the value stream
  • Champion flow efficiency by removing bottlenecks and managing work intake
  • Foster psychological safety, trust, and continuous learning

Value Stream Management, Flow Engineering, and Flow Metrics Optimization

  • Lead monthly Flow Metrics reviews to help teams surface and resolve inefficiencies
  • Track Flow Time, Efficiency, Load, Velocity, and Distribution (see the sketch after this list)
  • Ensure investment in tech debt, security, and sustainability, not just features

Cross-Team Collaboration & Dependency Management

  • Align Product, Engineering, and Agile leadership
  • Coordinate across teams to manage dependencies and reduce delivery friction
  • Partner with Platform and Production Engineering teams for smoother execution
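
Because several of these responsibilities hinge on the Flow Metrics themselves, here is a minimal sketch of how the five measures might be computed from a team’s work items. This is an illustration under stated assumptions, not any tool’s actual API: the WorkItem fields, the work-type names, and the reporting window are all hypothetical.

    from collections import Counter
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class WorkItem:
        kind: str                     # e.g., "feature", "defect", "debt", "risk"
        started: datetime             # when the item entered an active state
        finished: Optional[datetime]  # None while still in progress
        active_days: float            # days actively worked, excluding wait time

    def flow_metrics(items: List[WorkItem],
                     window_start: datetime,
                     window_end: datetime) -> dict:
        """Compute the five Flow Metrics for one reporting window."""
        done = [i for i in items
                if i.finished and window_start <= i.finished < window_end]
        wip = [i for i in items if i.finished is None]

        # Flow Time: elapsed calendar days from start to finish (minimum 1)
        flow_times = [max((i.finished - i.started).days, 1) for i in done]
        avg_flow_time = sum(flow_times) / len(done) if done else 0.0

        # Flow Efficiency: share of elapsed time spent actively working
        efficiency = (sum(i.active_days for i in done) / sum(flow_times)
                      if done else 0.0)

        # Flow Distribution: share of completed work by type
        counts = Counter(i.kind for i in done)
        distribution = {k: n / len(done) for k, n in counts.items()}

        return {
            "flow_velocity": len(done),          # items completed in the window
            "flow_time_avg_days": avg_flow_time,
            "flow_efficiency": efficiency,       # 0.0 to 1.0
            "flow_load": len(wip),               # items in progress right now
            "flow_distribution": distribution,
        }

Read together, these numbers support the monthly review conversation: rising Flow Load with flat Flow Velocity usually signals too much work in progress, and a Flow Distribution dominated by features suggests tech debt and defects are being starved.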

The Unicorn Problem: Why Overloading Other Roles Fails

Some argue that Product and Engineering Managers can take on these additional responsibilities, but at what cost?

The industry already struggles to fill these roles with strong candidates. When you ask one person to manage delivery flow, facilitate team dynamics, coach culture, drive Agile execution, and lead strategy, you create what I call the “unicorn problem.”

  • T-Shaped Leaders = Deep expertise in one area + a broad understanding of others
  • V-Shaped Leaders = Deep expertise in everything (engineering, Agile, customer insight, facilitation, coaching, metrics, and more)

Unicorns exist, but rarely, and not for long. Overloading these roles doesn’t set anyone up for success.

Should You Drop the Dedicated Scrum Master or Agile Leader Role?

Most organizations still have a Scrum Master or equivalent Agile role, but some are experimenting with eliminating it in favor of shared responsibilities.

While this can work in some instances, our experience shows that a dedicated Agile leadership role improves:

  • Delivery flow efficiency
  • Business alignment
  • Sustainable team execution
  • Psychological safety and culture

So before you eliminate the role, ask: Who on your team is incentivized to prioritize delivery balance across features, tech debt, security, and defects? If no one owns that responsibility, it’s likely no one is doing it well.

Again, I’m not prescribing a one-size-fits-all answer. I’m sharing what I’ve seen in practice: teams that struggled without this role, high-performing teams that outgrew it, and the evolution of the ADM as a critical driver of system-wide value delivery. The key is clarity of purpose and accountability, no matter the model.

“Agile leaders don’t just guide their teams. They protect and improve the entire delivery system; they are its guardians.” – Phil Clark

What’s your experience?

  • Has your organization eliminated this role?
  • If so, what impact has it had?
  • Should Agile execution be absorbed by Engineering and Product Managers?

Let’s keep the conversation going.

Related Posts

  • From Good to Great: Shifting to Outcomes in 2025, January 2025.
  • Beyond Facilitation: The Agile Leader’s Place in Cross-Functional Team Dynamics, February 2024.
  • Agile Software Delivery: Unlocking Your Team’s Full Potential. It’s not the Product Owner, December 2022.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management
