
We Have Metrics. Now What?

May 11, 2025 by philc

6 min read

A Guide for Legacy-Minded Leaders on Using Metrics to Drive the Right Behavior

From Outputs to Outcomes

A senior executive recently asked a VP of Engineering and the Head of Architecture for industry benchmarks on Flow Metrics. At first, this seemed like a positive step, shifting the focus from individual output to team-level outcomes. However, the purpose of the request raised concerns. These benchmarks were intended to evaluate engineering managers’ performance for annual reviews and possibly bonuses.

That’s a problem. Using system-level metrics to judge individual performance is a common mistake. It might look good on paper, but it risks turning system-level signals into personal scorecards, creating the very dysfunction these metrics are meant to reveal and improve. Using metrics this way negates their value and invites gaming over genuine improvement. This guide is for senior leaders adopting team-level metrics who want to use them effectively. You’ve chosen better metrics. Now, let’s make sure they work as intended.

To clarify, the executive’s team structure follows the Engineering Manager (EM) model, where EMs are responsible for the performance of the delivery team. In contrast, I support an alternative approach with autonomous teams built around team topologies. These teams include all the roles needed to deliver value, without a manager embedded in the team. These are two common but very different models of team structure and performance evaluation.

This isn’t the first time I’ve seen senior leaders misuse quantitative metrics, and it likely won’t be the last. So I asked myself: Now that more leaders have agreed to adopt the right metrics, do they know how to use them responsibly?

I will admit that I was frustrated to learn of this request, but the event inspired me to create a guide for leaders, especially those used to traditional, output-focused models who are new to Flow Metrics and team-level measurement. You’ve picked the right metrics; now comes the challenge: using them effectively without causing harm.

How to Use Team Metrics Without Breaking Trust or the System

1. Start by inviting teams into the process

  • Don’t tell them, “Flow Efficiency must go up 10%.”
  • Ask instead: “Here’s what the data shows. What’s behind this? What could we try?”

Why: Assume positive intent. Teams already want to improve. They’ll take ownership if you bring them into the process and give them time and space to act. Top-down mandates might push short-term results, but they usually kill long-term improvement.

2. Understand inputs vs. outputs

  • Output metrics (like Flow Time, PR throughput, or change failure rate) are results. You don’t control them directly.
  • Input metrics (like review turnaround time or number of unplanned interruptions) reflect behaviors teams can change.

Why: If you set targets on outputs, teams won’t know what to do. That’s when you get gaming or frustration. Input metrics give teams something they can improve. That’s how you get real system-level change.

I’ve been saying this for a while, and I like how Abi Noda and the DX team explain it: input vs. output metrics. It’s the same thing as leading vs. lagging indicators. Focus on what teams can influence, not just what you want to see improve.
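
To make the distinction concrete, here’s a minimal sketch in Python of how the two kinds of metrics might be computed; the work items, field names, and timestamps are hypothetical, not pulled from any particular tool:

```python
from datetime import datetime
from statistics import mean

# Hypothetical work items; the field names are illustrative.
items = [
    {"started": datetime(2025, 5, 1), "review_requested": datetime(2025, 5, 2),
     "review_done": datetime(2025, 5, 5), "finished": datetime(2025, 5, 8)},
    {"started": datetime(2025, 5, 3), "review_requested": datetime(2025, 5, 3),
     "review_done": datetime(2025, 5, 4), "finished": datetime(2025, 5, 6)},
]

# Input metric: review turnaround, a behavior the team can change directly.
review_turnaround = mean(
    (item["review_done"] - item["review_requested"]).days for item in items
)

# Output metric: flow time, a result the team influences only indirectly.
flow_time = mean((item["finished"] - item["started"]).days for item in items)

print(f"Avg review turnaround: {review_turnaround:.1f} days (input)")
print(f"Avg flow time: {flow_time:.1f} days (output)")
```

A team can act on review turnaround this week; flow time should follow as the inputs improve.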

3. Don’t turn metrics into targets

When a measure becomes a target, it stops being useful (Goodhart’s Law).

  • Don’t turn system health metrics into KPIs.
  • If people feel judged by a number, they’ll focus on making the number look good instead of fixing the system.

Why: You’ll get shallow progress, not real change. And you won’t know the difference because the data will look better. The cost? Lost trust, lower morale, and bad decisions.

4. Always add context

  • Depending on the situation, a 10-day Flow Time might be great or terrible.
  • Ask about the team’s product, the architecture, the kind of work they do, and how much unplanned work they handle.

Why: Numbers without context are misleading. They don’t tell the story. If you act on them without understanding what’s behind them, you’ll create the wrong incentives and fix the wrong things.

5. Set targets the right way

  • Not every metric needs a goal.
  • Some should trend up; others should stay stable.
  • Don’t use blanket rules like “improve everything by 10%.”

Why: Metrics behave differently. Some take months to move. Others can be gamed easily. Think about what makes sense for that metric in that context. Real improvement takes time; chasing the wrong number can do more harm than good.

6. Tie metrics back to outcomes and the business

  • Don’t just say, “Flow Efficiency improved.” Ask, what changed?
    • Did we deliver faster?
    • Did we reduce the cost of delay?
    • Did we create customer value?

If you’ve read my other posts, you know I recommend tying every epic and initiative to an anticipated outcome. That mindset also applies to metrics. Don’t just look at the number. Ask what value it represents.

Also, it’s critical that teams use metrics to identify their bottleneck. That’s the key. Real flow improvement comes from fixing the biggest constraint.

If you improve something downstream of the bottleneck, you’re not improving flow. You’re just making things look better in one part of the system. It’s localized and often a wasted effort.

Why: If the goal is better business outcomes, you must connect what the team does with how it moves the needle. Metrics are just the starting point for that conversation.
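
To see why in miniature, here’s a sketch that models a delivery pipeline as stages with weekly capacities; the stage names and numbers are invented, but the arithmetic shows the point: throughput is set by the constraint, so improving any other stage changes nothing.

```python
# Hypothetical weekly capacity (work items) of each delivery stage.
capacity_per_week = {
    "analysis": 12,
    "development": 8,
    "code_review": 4,   # the constraint: work queues here
    "testing": 9,
    "deployment": 20,
}

def system_throughput(capacities: dict) -> int:
    """A pipeline can't deliver faster than its slowest stage."""
    return min(capacities.values())

print(system_throughput(capacity_per_week))   # 4 items/week

# Doubling testing capacity (downstream of the bottleneck) changes nothing:
capacity_per_week["testing"] = 18
print(system_throughput(capacity_per_week))   # still 4 items/week

# Raising capacity at the constraint itself is what moves the system:
capacity_per_week["code_review"] = 8
print(system_throughput(capacity_per_week))   # now 8 items/week
```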

7. Don’t track too many things

  • Stick to 3-5 input metrics at a time.
  • Make these part of retrospectives, not just leadership dashboards.

Why: Focus drives improvement. If everything is a priority, nothing is. Too many metrics dilute the team’s energy. Let them pick the right ones and go deep.

8. Build a feedback loop that works

  • Metrics are most useful when teams review them regularly.
  • Make time to reflect and adapt.

We’re still experimenting with what cadence works best. Right now, monthly retrospectives are the minimum. That gives teams short feedback loops to adjust their improvement efforts. A quarterly check-in is still helpful for zooming out. Both are valuable. We’re testing these cycles, but they give teams enough time to try, reflect, and adapt.

Why: Improvement requires learning. Dashboards don’t improve teams. Feedback does. Create a rhythm where teams can test ideas, measure progress, and shift direction.

A Word of Caution About Using Metrics for Performance Reviews

Some leaders ask, “Can I use Flow Metrics to evaluate my engineering managers?” You can, but it’s risky.

Flow Metrics tell you how the system is performing. They’re not designed to evaluate individuals. If you tie them to bonuses or promotions, you’ll likely get:

  • Teams gaming the data
  • Managers focusing on optics, not problems
  • Reduced trust and openness

Why: When you make metrics part of a performance review, people stop using them for improvement. They stop learning. They play it safe. That hurts the team and the system.

Here’s what you can do instead:

  • Use metrics to guide coaching conversations, not to judge.
  • Evaluate managers based on how they improve the system and support their teams.
  • Reward experimentation, transparency, and alignment to business value.

Performance is bigger than one number. Metrics help tell the story, but they aren’t the story.

Sidebar: What if Gamification Still Improves the Metric?

I’ve heard some folks say, “I’m okay with gamification; if the number gets better, the team gets better.”

I get where they’re coming from. Sometimes, gamifying a number can move it. But here’s the problem:

  1. It often hides the real issues.
  2. It encourages people to optimize for appearances, not outcomes.
  3. It breaks the feedback loop you need to find the real constraints.
  4. It builds a culture of avoidance instead of learning.

So, while gamification might improve the score, it doesn’t consistently improve the system, and rarely as efficiently as intentional, transparent work on the problem.

If the goal is long-term performance, trust the process. Let teams learn from the data. Don’t let the number become the mission.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

A Self-Guided Performance Assessment for Agile Delivery Teams

May 3, 2025 by philc

10 min read

This article started with a conversation.

One of my engineering managers pointed out a troubling pattern. Teams continued to deliver even when builds failed or tests flaked, and they did so without pausing or showing any curiosity.

“The pipeline is failing, but no one’s looking at it.”

He noticed some engineers weren’t following their code after it went into production. They weren’t checking logs or using observability tools. Meanwhile, others followed through, checking metrics, monitoring logs, and validating behavior. Then he asked:

“We do performance reviews for individuals; why not for teams?”

And it struck me: we don’t, not in any meaningful, structured way. We hold individuals accountable for how they show up, but teams, the unit that delivers value, are too often assessed only by their outputs, not their behaviors.

This article isn’t a proposal. It’s a prompt.

A Reflection More Than a Framework

This isn’t a manager’s tool or a leadership scorecard. It’s a guide for teams looking to improve how they collaborate with purpose. It’s for delivery teams that value their habits just as much as their results.

Use it as a retro exercise. A quarterly reset. A mirror.

Why Team Reflection Matters

We already measure delivery performance. DORA. Flow. Developer Experience.
But those metrics don’t always answer:

  • Are we doing what we said mattered, like observability and test coverage?
  • Are we working as a team or as individuals executing in parallel?
  • Do we hold each other accountable for delivering with integrity?

This is the gap: how teams work together. This guide helps fill it, not to replace metrics, but to deepen the story they tell.

What This Is (And Isn’t)

You might ask: “Don’t SAFe, SPACE, DORA, or Flow Metrics already do this?”
Yes and no. Those frameworks are valuable. But they answer different questions:

  • DORA & Flow: How fast and stable is our delivery?
  • DX Core 4 & SPACE: How do developers feel about their work environment?
  • Maturity Models: How fully have we adopted Agile practices?
  • SAFe Measure and Grow: For organizations implementing SAFe, it evaluates enterprise agility across dimensions such as team agility, product delivery, and lean leadership.

What they don’t always show is:

  • Are we skipping discipline under pressure?
  • Do we collaborate across roles or operate in silos?
  • Are we shipping through red builds and hoping for the best?

But the question stuck with me:
If we hold individuals accountable for how they show up, shouldn’t we do the same for teams?

What follows is a framework and a conversation starter, not a mandate. It’s just something to consider because, in many organizations, teams are where the real impact (or dysfunction) lives.

Suggested Team Reflection Dimensions

You don’t need to use all twelve categories. Start with the ones that matter most to your team, or define your own. This section is designed to help teams reflect on how they work together, not just what they deliver.

But before diving into individual dimensions, start with this simple but powerful check-in.

Would We Consider Ourselves Underperforming, Performing, or High-Performing?

This question encourages self-awareness without any external judgment. The team should decide together: no scorecards, no leadership evaluations, just a shared reflection on your experience as a delivery team.

From there, explore:

  • What makes us feel that way?
    What behaviors, habits, or examples support our self-assessment?
  • What should we keep doing?
    What’s already working well that we want to protect or double down on?
  • What should we stop doing?
    What’s causing friction, waste, or misalignment?
  • What should we start doing?
    What’s missing that could improve how we operate?

This discussion often surfaces more actionable insight than metrics alone. It grounds the assessment in the team’s shared experience and sets the tone for improvement, not judgment.

A Flexible Self-Evaluation Scorecard

While this isn’t designed as a top-down performance tool, teams can use it as a self-evaluation scorecard if they choose. The reflection tables that follow can help teams:

  • Identify where they align today: underperforming, performing, or high-performing.
  • Recognize the dimensions where they excel and where they have room to improve.
  • Prioritize the changes that will have the greatest impact on how they deliver.

No two teams will see the same patterns, and that’s the point. Use the guidance below not as a measurement of worth but as a compass to help your team navigate toward better outcomes together.
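
If your team wants to capture the reflection in a lightweight artifact, something like the sketch below is plenty; the structure, field names, and example values are only a suggestion, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TeamReflection:
    """One team's self-assessment for a single check-in."""
    overall: str  # "underperforming" | "performing" | "high-performing"
    dimensions: dict[str, str] = field(default_factory=dict)
    keep_doing: list[str] = field(default_factory=list)
    stop_doing: list[str] = field(default_factory=list)
    start_doing: list[str] = field(default_factory=list)

q2_checkin = TeamReflection(
    overall="performing",
    dimensions={
        "Flow & Efficiency": "meeting",
        "Observability & Operational Readiness": "not meeting",
    },
    keep_doing=["Pairing on risky changes"],
    stop_doing=["Shipping through red builds"],
    start_doing=["Reviewing production alerts in retro"],
)
```

Revisiting the same record at the next check-in turns a one-off conversation into a trend.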

The 12-Dimension Agile Team Performance Assessment Framework

These dimensions serve as valuable tools for self-assessments, retrospectives, or leadership reviews, offering a framework to evaluate not just what teams deliver, but how effectively they perform.

  1. Collaboration & Communication
  2. Planning & Execution
  3. Data-Driven Improvement
  4. Code Quality & Technical Health
  5. Observability & Operational Readiness
  6. Flow & Efficiency
  7. Customer & Business Focus
  8. Role Clarity & Balanced Decision-Making
  9. Business-Technical Integration
  10. Engineering Discipline & Best Practice Adoption
  11. Accountability & Delivery Integrity
  12. Capabilities & Adaptability

These dimensions help teams focus not just on what they’re delivering but also on how their work contributes to long-term success.

Quick Reflection Table

This simple table is a great way to start conversations. It works well for retrospectives, quarterly check-ins, or when something feels off. Each category includes a key question and signs that may indicate your team is facing challenges in that area.

Collaboration & Communication
Reflection Prompt: How do we share knowledge? Are we aligned?
Signs of struggle: Silos, missed handoffs, unclear ownership.

Planning & Execution
Reflection Prompt: Do we plan realistically and deliver what we commit?
Signs of struggle: Reactive work, missed goals, poor forecasts.

Data-Driven Improvement
Reflection Prompt: Are we learning from our metrics?
Signs of struggle: Metrics ignored, retros skipped or repetitive.

Code Quality & Technical Health
Reflection Prompt: Is quality a first-class value for us?
Signs of struggle: Bugs, manual processes, growing technical debt.

Observability & Operational Readiness
Reflection Prompt: Can we detect and understand issues quickly?
Signs of struggle: No alerts, users discover issues first.

Flow & Efficiency
Reflection Prompt: How well does work move through our system?
Signs of struggle: Bottlenecks, context switching, unfinished work piling up.

Customer & Business Focus
Reflection Prompt: Do we understand the “why” behind our work?
Signs of struggle: Features misaligned with outcomes, limited feedback loops.

Role Clarity & Decision-Making
Reflection Prompt: Do we decide together and share ownership?
Signs of struggle: Decisions dominated by one role, unclear priorities.

Business-Technical Integration
Reflection Prompt: Are we balancing product delivery with long-term tech health?
Signs of struggle: Short-term focus, ignored debt, rework needed later.

Engineering Discipline
Reflection Prompt: Are we consistently applying best practices?
Signs of struggle: Skipped testing, unstable releases, fragile systems.

Delivery Integrity & Accountability
Reflection Prompt: Can we be counted on, and are we transparent about risks?
Signs of struggle: Missed deadlines, surprise failures, hidden blockers.

Capabilities & Adaptability
Reflection Prompt: Are we equipped to handle what’s coming?
Signs of struggle: Skill gaps, delays, high dependency on other teams.


Detailed Assessment Reference

For teams looking for more detail, the next section breaks down each reflection category. It explains what “Not Meeting Expectations,” “Meeting Expectations,” and “Exceeding Expectations” look like in practice.

Collaboration & Communication

  • Not Meeting Expectations: Team works in silos; poor knowledge sharing.
  • Meeting Expectations: Team shares knowledge openly and communicates effectively.
  • Exceeding Expectations: Team drives cross-role collaboration and creates shared clarity.

Planning & Execution

  • Not Meeting Expectations: No planning rhythm; commitments missed; reactive delivery.
  • Meeting Expectations: Regular planning practices; predictable and consistent delivery.
  • Exceeding Expectations: Clear, realistic planning; consistently meets commitments and adapts with agility.

Data-Driven Improvement

  • Not Meeting Expectations: Rarely reviews metrics; decisions based on opinion.
  • Meeting Expectations: Regularly uses metrics to inform retrospectives and improvements.
  • Exceeding Expectations: Metrics drive experimentation and continuous learning.

Code Quality & Technical Health

  • Not Meeting Expectations: High defect rates; test automation and refactoring ignored.
  • Meeting Expectations: Code reviews and basic testing in place; some debt is tracked.
  • Exceeding Expectations: Quality is a shared team value; sustainable architecture and coverage prioritized.

Observability & Operational Readiness

  • Not Meeting Expectations: No monitoring; users report issues first.
  • Meeting Expectations: Monitoring and alerting in place; incidents reviewed post-mortem.
  • Exceeding Expectations: Observability is embedded; teams detect issues early and improve proactively.

Flow & Efficiency

  • Not Meeting Expectations: Work blocked or idle; slow feedback loops.
  • Meeting Expectations: WIP is controlled; team tracks and removes some blockers.
  • Exceeding Expectations: The team optimizes flow across stages; constraints are resolved quickly.

Customer & Business Focus

  • Not Meeting Expectations: Features shipped without validation or connection to outcomes.
  • Meeting Expectations: Team understands business goals and loosely connects work to them.
  • Exceeding Expectations: Customer value drives prioritization; teams iterate based on delivery results.

Role Clarity & Decision-Making

  • Not Meeting Expectations: Decisions are top-down; no shared ownership.
  • Meeting Expectations: Product and engineering collaborate and plan together.
  • Exceeding Expectations: Teams co-own decisions with transparent tradeoffs and joint accountability.

Business-Technical Integration

  • Not Meeting Expectations: Tech health is deprioritized or ignored.
  • Meeting Expectations: Technical and business needs are considered during planning.
  • Exceeding Expectations: Resilience, scalability, and future value are built into every delivery conversation.

Engineering Discipline

  • Not Meeting Expectations: Testing, security, and deployment planning are skipped.
  • Meeting Expectations: The team follows most standard practices, even under pressure.
  • Exceeding Expectations: Best practices are applied consistently; technical health is never optional.

Delivery Integrity & Accountability

  • Not Meeting Expectations: Team misses commitments and avoids responsibility.
  • Meeting Expectations: Most commitments are met; blockers are raised early.
  • Exceeding Expectations: The team demonstrates high integrity and owns outcomes, not just tasks.

Capabilities & Adaptability

  • Not Meeting Expectations: Gaps in skills go unaddressed; team lacks coverage.
  • Meeting Expectations: The team has the necessary skills for current work and knows when to ask for help.
  • Exceeding Expectations: The team evolves proactively, cross-trains, and adapts with resilience.


These tools are meant to start conversations. Use them as a guide, not a strict scoring system, and revisit them as your team grows and changes. High-performing teams regularly reflect as part of their routine, not just occasionally. These tools are designed to help you build that habit intentionally.

How to Use This and Who Should Be Involved

This framework isn’t a performance review. It’s a reflection tool designed for teams to assess themselves, clarify their goals, and identify areas for growth.

Here’s how to make it work:

1. Run It as a Team

Use this framework during retrospectives, quarterly check-ins, or after a major delivery milestone. Let the team lead the conversation. They’re closest to the work and best equipped to evaluate how things feel.

The goal isn’t to assign grades. It’s to pause, align, and ask: How are we doing?

2. Make It Yours

There’s no need to use all twelve dimensions. Start with the ones that resonate most. You can rename them, add new ones, or redefine what “exceeding expectations” looks like in your context.

The more it reflects your team’s values and language, the more powerful the reflection becomes.

3. Use Metrics to Support the Story, Not Replace It

Delivery data like DORA, Flow Metrics, or Developer Experience scores can add perspective. But they should inform, not replace, the conversation. Numbers are helpful, but they don’t speak to how it feels to deliver work together. Let data enrich the dialogue, not dictate it.

4. Invite Broader Perspectives

Some teams can gather anonymous 360° feedback from stakeholders or adjacent teams to surface blind spots and validate internal perceptions.

Agile Coaches or Delivery Leads can also bring an outside-in view, helping the team see patterns over time, connecting the dots across metrics and behaviors, and guiding deeper reflection. Their role isn’t to evaluate but to support growth.

5. Let the Team Decide Where They Stand

As part of the assessment, ask the team:
Would we consider ourselves underperforming, performing, or high-performing?

Then explore:

  • What makes us feel that way?
  • What should we keep doing?
  • What should we stop doing?
  • What should we start doing?

These questions give the framework meaning. They turn observation into insight and insight into action.

This Is About Ownership, Not Oversight

This reflection guide and its 12 dimensions can serve as a performance management tool, but I strongly recommend using it as a check-in resource for teams. It’s designed to build trust, encourage honest conversations, and offer a clear snapshot of the team’s current state. When used intentionally, it enhances team cohesion and strengthens overall capability. For leaders, focusing on recurring themes rather than individual scores reveals valuable patterns that can inform coaching efforts rather than impose control. Adopting it is in your hands and your team’s.

Final Thoughts

This all started with a conversation and a question: “We do performance reviews for individuals, but what about teams?” If we care about how individuals perform, shouldn’t we also care about how teams perform together?

High-performing teams don’t happen by accident. They succeed by focusing on both what they deliver and how they deliver it.

High-performing teams don’t just meet deadlines; they adapt, assess themselves, and improve together. This framework provides them with a starting point to make that happen.


Related Articles

If you found this helpful, here are a few related articles that explore the thinking behind this framework:

  • From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics
  • Navigating the Digital Product Workflow Metrics Landscape: From DORA to Comprehensive Value Stream Management Platform Solutions

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

From Scrum Master to Agile Delivery Manager: Evolution in the Age of Flow

April 14, 2025 by philc

6 min read

This post was inspired by a LinkedIn post shared by Dave Westgarth.

In 2025, we formally changed the title of Scrum Master to Agile Delivery Manager (ADM) in our technology division. This renaming wasn’t a rebrand for the sake of optics. It reflected a deeper evolution already happening, rooted in the expanding scope of delivery leadership, the adoption of Flow Metrics and Value Stream Management, and our real-world shift from strict Scrum toward a more customized Kanban-based model.

It was this year that the name finally clicked. After assigning Value Stream Architect responsibilities to our Scrum Masters and giving them ownership of delivery metrics, team-level delivery health, and collaboration across roles within their Agile team, I realized the title “Scrum Master” no longer fit their role. I even considered Agile Value Stream Manager, but it felt too narrow and platform-specific.

That’s when Agile Delivery Manager stood out, not only as a better label but also as a more accurate reflection of the mindset and mission.

I’m not alone in this. My wife, a Scrum Master, noticed a rise in Agile Delivery Manager roles. These roles are emerging as a natural evolution of the Scrum Master role, broader in scope but still grounded in servant leadership and Agile values. This shift is becoming more common across industries.

Why We Made the Change

This wasn’t an overnight decision—it was the culmination of years of observing the gap between traditional agile roles and modern delivery demands. I’ve written extensively about the evolving nature of delivery roles in the modern product and engineering ecosystem. In “Navigating the Digital Product Workflow Metrics Landscape,” I highlighted how organizations that have matured beyond Agile 101 practices shift their attention upstream toward value creation, flow efficiency, and business impact.

In that article, I shared:

“Organizations that have invested in high automation, eliminated waste, and accelerated CI/CD cycles are now shifting left—seeking broader visibility from idea to operation.”
– Navigating the Digital Product Workflow Metrics Landscape

Similarly, in “Dependencies Are Here to Stay,” I discussed why frameworks can’t box delivery leadership in:

“We can’t measure agility in isolation. Dependencies are part of the system, not a failure of it. Leadership roles must evolve to manage flow across those dependencies, not just within a team board.”

This evolution is what our former Scrum Masters were already doing. They were coaching teams, guiding delivery conversations, navigating delivery risks, managing stakeholder expectations, and tracking systemic flow. The title needed to grow with the responsibility.

The Agile Role That Connects It All

Agile leadership roles and responsibilities vary across organizations. Some have Scrum Masters or Agile Leaders, while others use titles like Technical Project Manager or Agile Coach. In some cases, responsibilities shift to Engineering or Product Managers, and some companies distribute these duties among team members and eliminate the role entirely. Despite these differences, we believe a dedicated Agile leadership position is valuable. The role plays a key part in improving team performance, increasing delivery efficiency, and optimizing workflows.


The Agile Delivery Manager role is unique in that it is the only role on the team not incentivized by a specific type of work.

  • Product Managers focus on growth and prioritize new features.
  • Technical Leads concentrate on architecture and managing technical debt.
  • Information Security leaders work to reduce security risks.
  • QA teams ensure defects are identified and fixed.

The Agile Delivery Manager operates at a higher level, overseeing workflow across the distribution of work types, including features, technical debt, risks, and defects. The role fosters continuous team improvement while ensuring that deliveries consistently drive tangible business value.
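
Flow Distribution, the share of completed work by type, is one concrete way to watch that balance. Here’s a minimal sketch; the work-type labels and counts are made up for illustration:

```python
from collections import Counter

# Hypothetical completed work items over a quarter, tagged by type.
completed = ["feature"] * 34 + ["tech_debt"] * 6 + ["risk"] * 2 + ["defect"] * 8

counts = Counter(completed)
total = sum(counts.values())

for work_type, n in counts.most_common():
    print(f"{work_type:>9}: {100 * n / total:5.1f}%")

# 68% features against 12% tech debt is the kind of imbalance an ADM
# would surface with the team before it compounds.
```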

Inside the Agile Delivery Manager Role

It’s worth clarifying: In our model, Agile Delivery Managers remain focused on their assigned Agile team or teams. While the title may sound broader, the role is not intended to operate across multiple delivery teams or coordinate program-level work. Instead, ADMs guide and improve the delivery flow within their own team context—coaching the team, optimizing its workflow, and partnering with product and engineering to ensure value is delivered efficiently.

Here’s how we now define the Agile Delivery Manager in our updated job description:

“As an Agile Delivery Manager, you’ll lead strategic transformation, champion Flow Metrics and VSM, and shape how teams deliver real business value.”

Key responsibilities include:

  • Agile Leadership & Flow-Based Delivery
    Coaching teams while enabling clarity, cadence, and sustainability in customized Kanban-style systems.
  • Team Collaboration & Dependency Management
    Collaborating with Product, QA, InfoSec, and Engineering roles within the team to resolve blockers, ensure quality, and maintain delivery flow.
  • Flow Metrics & Value Stream Optimization
    Leading metric reviews using Flow Time, Load, Efficiency, and Distribution to drive better delivery outcomes.
  • Value Stream Architecture
    Acting as system-level delivery architects, not of code, but of how work flows from concept to value.
  • Strategic Reporting & Outcome Alignment
    Building quarterly delivery reports that tie execution to business value, supporting leadership visibility and continuous improvement.

This role no longer fits the narrow scope that Scrum once offered. It combines delivery leadership, agile stewardship, and flow optimization.

What This Means for Scrum Masters

If you’re a Scrum Master wondering what’s next, you’re not alone. You’re likely doing many of these things already, but this role demands the time to widen the lens.

As Dave Westgarth shared on LinkedIn:

“You’re using the same core competencies: facilitation, servant leadership, coaching, and team empowerment. They just get applied at different levels and from different perspectives.”

This evolution isn’t about abandoning Agile. It’s about scaling its intent.

Many of our ADM team members still value their strong Scrum foundation. However, they’ve broadened their focus to improve delivery efficiency, enhance team coordination, manage delivery risks, and ensure smooth team workflows across competing work types and stakeholder needs.

If you’re already guiding delivery beyond team ceremonies, influencing system flow, and navigating complexity, this evolution is your next chapter.

Final Thoughts

The shift to an Agile Delivery Manager reflects a modern reality: frameworks alone don’t scale agility; people do. The ADM role honors the coaching mindset of the Scrum Master while embracing the delivery complexities of today’s hybrid, platform-heavy, and outcome-driven organizations.

For our division, the name change signaled to our teams and business stakeholders that delivery leadership had evolved. More importantly, it gave our people permission to grow into that evolution.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable

April 12, 2025 by philc

9 min read

Connect the dots: Show how engineering efforts drive business impact by linking the work to key organizational metrics and outcomes. Highlight engineering’s value and contribution to overall success.

When Sarcasm Reveals Misalignment

Last week, one of my Agile Delivery Leaders brought forward a concern from her team that spoke volumes, not just about an individual, but about the kind of tension that quietly persists in even the most mature organizations.

She asked her team to define the expected outcome for a new Jira Epic, a practice I’ve asked all teams to adopt to ensure software investments align with business goals. However, the team struggled to identify the anticipated outcome. On top of that, a senior team member who’s been part of our transformation for years dismissed the idea instead of contributing to the discussion, leaving her in a difficult position, torn between his seniority and her own responsibilities. He commented something like:

“Why are we doing this? This is stupid. Phil read another book, and suddenly we’re all expected to jump on board.”

When I first heard that comment secondhand, I felt a wave of anger; it struck me as pure arrogance. This team member chose not to share his perspective with me directly, perhaps for reasons he deemed valid. But as I thought more about it, I realized it wasn’t arrogance at all, but ignorance. Not malicious ignorance, but the kind that comes from discomfort, uncertainty, or an unwillingness to admit they no longer understand or align with where things are going. Comments like that are often defense mechanisms. They mask deeper resistance, reveal a lack of clarity, or quietly question whether someone still fits into a system evolving beyond their comfort zone.

This wasn’t about rejecting change or progress; it was about pushing back against how we’re evolving. Moments like this remind us that true transformation isn’t just about forging ahead; it’s about fostering belief and alignment in mindset and actions as we move forward.

Purpose-Driven Development: My Approach to Sustainable Alignment

I asked teams to define anticipated outcomes not to add overhead but to protect the integrity of the way we build software.

Over the past decade, I’ve worked hard to lead our engineering teams and organization out of the “feature factory” trap, where the focus is on output volume, velocity, and shipping for the sake of shipping. Through that experience, I developed Purpose-Driven Development (PDD), my own definition of the term.

Purpose-driven development might sound like a buzzword, but it’s how we bring Agile and Lean principles to life. It ensures delivery teams aren’t just writing code; they’re solving the right problems for the right reasons with clear goals and intentions.

PDD is built on one core idea: every initiative, epic, and sprint should be based on a clear understanding of why it matters.

Anticipated Outcomes: A Small Practice That Changes Everything

To embed this philosophy into our day-to-day work, we introduced a simple yet powerful practice:

Every Epic or Initiative must include an “Anticipated Outcome.”

Just a sentence or two that answers:

  • What are we hoping to achieve by doing this work?
  • How will it impact the customer, the Business, or the platform?

We don’t expect perfection. We expect intention. The goal isn’t to guarantee results but to anchor the work in a hypothesis that can be revisited, challenged, or learned from.

This simple shift creates:

  • Greater alignment between teams and strategy
  • More meaningful prioritization
  • Opportunities to reflect on outcomes, not just outputs
  • Visibility across leadership into what we’re investing in

Who Might Push Back and Why That’s Okay

When we ask teams to define anticipated outcomes, it’s not about creating friction; it’s about creating focus. And this shouldn’t feel like a burden to most of the team.

I believe engineers will welcome it. Whether they realize it at first or not, this clarity gives them purpose. It ties their daily work to something that matters beyond code.

The only two roles I truly expect might feel frustration when asked to define anticipated outcomes are:

Product Managers and Technical Leaders.

And even that frustration? It’s understandable.

Product Managers often experience pain from not being involved early enough in the ideation or problem-definition stage. They may not know the anticipated outcome if they’re handed priorities from a higher-level product team without the context or autonomy to shape the solution. And that’s the problem, not the question itself, but the absence of trust and inclusion upstream.

For Technical Leaders, the frustration often comes when advocating for tech debt work. They know the system needs investment but struggle to translate that into a clear business reason. I get it; it’s frustrating when you know the consequences of letting entropy creep in, but you haven’t been taught to describe that impact in terms of business value, customer experience, or system performance.

But that’s exactly why this practice matters.

Asking for an anticipated outcome isn’t a punishment. It’s an exercise in alignment and clarity. And if that exercise surfaces frustration, that’s not failure. It’s the first step toward better decision-making and stronger cross-functional trust.

Whether it’s advocating for feature delivery or tech sustainability, we can’t afford to work in a vacuum. Every initiative, whether shiny and new or buried in system debt, must have a reason and a result we’re aiming for.

Anticipated Outcomes First, But OKR Alignment Is the Future

When I introduced the practice of documenting anticipated outcomes in every Epic or Initiative, I also asked for something more ambitious: a new field in our templates to capture the parent OKR or Key Result driving the work.

The goal was simple but powerful:

If we claim to be an outcome-driven organization, we should know what outcome we’re aiming for and where it fits in our broader strategy.

I aimed to help teams recognize that their Initiatives or Epics could serve as team-level Key Results directly tied to overarching business objectives. After all, this work doesn’t appear by chance. It’s being prioritized by Product, Operations, or the broader Business for a deliberate purpose: to drive progress and advance the company’s goals.

But when I brought this to our Agile leadership group, the response was clear: this was too much to push simultaneously.

Some teams didn’t know the parent KR, and some initiatives weren’t tied to a clearly articulated OKR. Our organizational OKR structure was often incomplete, and we were missing the connective tissue between top-level objectives and team-level execution.

And they were right.

We’re still maturing in how we connect strategy to delivery. For many teams, asking for the anticipated outcome and the parent OKR at once felt like a burden, not a bridge.

So, we paused the push for now. My focus remains first on helping teams articulate the anticipated outcome. That alone is a leap forward. As we strengthen that muscle, I’ll help connect the dots upward, mapping team efforts to the business outcomes they drive, even if we don’t have the complete OKR infrastructure yet.

Alignment starts with clarity. And right now, clarity begins with purpose.

Without an anticipated outcome, every initiative is a dart thrown in the dark.

It might land somewhere useful or waste weeks of productivity on something that doesn’t matter.

Documenting the outcome gives us clarity and direction. It means we’re making strategic moves, not random ones. And it reduces the risk of high-output teams being incredibly productive… at the wrong thing.

Introducing the Feature Factory Ratio

To strengthen our focus on PDD and prioritize outcomes over outputs, we are introducing a new core insights metric as part of our internal diagnostics:

Feature Factory Ratio (FFR) =

(Number of Initiatives or Epics without Anticipated Outcomes / Total Number of Initiatives or Epics) × 100

The higher the ratio, the greater the risk of operating like a feature factory, moving fast but potentially delivering little that matters.

The lower the ratio, the more confident we can be that our teams are connecting their work to value.
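
In code, the calculation is a one-liner. Here’s a minimal sketch assuming epics exported as simple records; the “anticipated_outcome” field name and the sample data are hypothetical:

```python
# Hypothetical epic records; "anticipated_outcome" is an illustrative field name.
epics = [
    {"key": "EPIC-101", "anticipated_outcome": "Cut onboarding drop-off by 15%"},
    {"key": "EPIC-102", "anticipated_outcome": None},
    {"key": "EPIC-103", "anticipated_outcome": ""},
    {"key": "EPIC-104", "anticipated_outcome": "Reduce checkout latency"},
]

def feature_factory_ratio(epics: list[dict]) -> float:
    """Epics without an anticipated outcome, as a percentage of all epics."""
    if not epics:
        return 0.0
    missing = sum(
        1 for epic in epics
        if not (epic.get("anticipated_outcome") or "").strip()
    )
    return 100 * missing / len(epics)

print(f"Feature Factory Ratio: {feature_factory_ratio(epics):.0f}%")  # 50%
```

Counting blank and missing fields the same way matters; an empty box is still an unanswered “why.”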

This ratio isn’t about micromanagement; it’s about organizational awareness. It tells us where alignment is breaking down and where we may need to revisit how we communicate the “why” behind our work.

Why We Call It the Feature Factory Ratio

When I introduced this metric, I considered several other names:

  • Outcome Alignment Ratio – Clear and descriptive, but lacking urgency
  • Clarity of Purpose Index – Insightful, but a bit abstract
  • Value Connection Metric – Emphasizes intent, but sounds like another analytics KPI

Each option framed the idea well, but they didn’t hit the nerve I wanted to expose.

Ultimately, I chose the Feature Factory Ratio because it speaks directly to the cultural pattern we’re trying to break.

It’s provocative by design. It challenges teams and leaders to ask, “Are we doing valuable work or just shipping features?” It turns an abstract concept into a visible metric and surfaces conversations we must have when our delivery drifts from our strategy.

Sometimes, naming things with impact helps us lead the behavior change that softer language can’t.

Sidebar: Superficial Alignment, The Silent Threat

One of the biggest leadership challenges in digital transformation isn’t open resistance; it’s superficial alignment.

Some senior leaders attend the workshops, adopt the lingo, and show up to the town halls, but when asked to change how they work or lead, they bristle. They revert. They roll their eyes or make sarcastic comments.

But they’re really saying: I’m not sure I believe in this, or I don’t know how I fit anymore.

The danger is that superficial alignment looks like progress while blocking true transformation. It creates cultural drag. It confuses teams and weakens momentum.

Moments like the one I shared remind me that transformation isn’t a checkbox but a leadership posture. And sometimes, those sarcastic comments? They’re your clearest sign of where real work still needs to happen.

Start Where You Are and Grow from There

We’re all at different points in our transformation journeys as individuals, teams, and organizations.

So, instead of reacting with frustration when someone can’t articulate an outcome or when a snide remark surfaces resistance, use it as a signal.

Meet your team where they are. Use every gap as a learning opportunity, not a leadership failure.

If a team can’t answer “What’s the anticipated outcome?” today, help them start asking it anyway. The point isn’t to have every answer right now. It’s to build the muscle so that someday, we will.

These questions aren’t meant to judge where we are. They’re meant to guide us toward where we’re trying to go.

This Is the Work of Modern Software Leadership

It’s easy to say we want to be outcome-driven. Embedding that belief into daily practice is harder, especially when senior voices or legacy habits push back.

But this is the work:

  • Aligning delivery to strategy
  • Teaching teams to think in terms of impact
  • Holding the line on purpose—even when it’s uncomfortable
  • Measuring not just what we ship but why we’re shipping it

Yes, I’ve read my fair share of books. Along the way, key moments and outcomes have shaped my journey in adopting new initiatives within our division and organization, such as Value Stream Management, and deepened my understanding of what it means to deliver real value. I’ve led teams through transformation and seen what works. From my experience in our organization and working with other industry leaders, I’ve learned that software delivery with a clear purpose is more effective, empowering, and valuable for the Business, our customers, and the teams doing the work.


Leader’s Checklist: Outcome Alignment in Agile Teams

Use this checklist to guide your teams and yourself toward delivering work that matters.

1. Intent Before Execution

  • Is every Epic or Initiative anchored with a clear Anticipated Outcome?
  • Have we stated why this work matters to the customer, business, or platform?
  • Are we avoiding the trap of “just delivering features” without a defined end state?

2. Strategic Connection

  • Can this work be informally or explicitly tied to a higher-level Key Result, business goal, or product metric?
  • Are we comfortable asking, “What is the business driver behind this work?” even if it’s not written down yet?

3. Team-Level Awareness

  • Do developers, QA, and designers understand the purpose behind what they’re building?
  • Can the team articulate what success looks like beyond “we delivered it”?

4. Product Owner Empowerment

  • Has the Product Manager or Product Owner been involved in problem framing, or were they handed a solution from above?
  • If they seem disconnected from the outcome, is that a signal of upstream misalignment?

5. Tech Debt with Purpose

  • If the work is tech debt, have we articulated its impact on system reliability, scalability, or risk?
  • Can we tie this work back to customer experience, transaction volume, or long-term business performance?

6. Measurement & Reflection

  • Are we tracking how many Initiatives or Epics lack anticipated outcomes using the Feature Factory Ratio?
  • Do we ever reflect on anticipated vs. actual outcomes once work is delivered?

7. Cultural Leadership

  • Are we reinforcing that asking, “What’s the anticipated outcome?” is about focus, not control?
  • When we face resistance or discomfort, are we leading with curiosity instead of compliance?

Remember:

Clarity is a leadership responsibility.

If your teams don’t know why they’re doing the work, the real problem is upstream, not them.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Flow Retreat 2025: Practicing the Work Behind the Work

March 29, 2025 by philc

4 min read

The Flow Leadership Retreat was the vision of Steve Pereira, co-author of the recently released book Flow Engineering: From Value Stream Mapping to Effective Action, and Kristen Haennel, his partner in building communities rooted in learning, collaboration, and systems thinking. But this wasn’t a typical professional gathering. Rather than a conference packed with sessions and slides, they created an immersive experience designed to bring together professionals from diverse industries to step back, reflect, and practice what it truly means to improve the flow of work.

The setting, against the remote and stunning oceanfront of the Yucatán Peninsula, wasn’t just beautiful; it was intentional. Free from the usual distractions, it created space for focused thinking, deeper conversations, and clarity that rarely emerges in day-to-day operations.

When I joined this first-ever Flow Leadership Retreat in March 2025, I expected thoughtful discussions on delivery systems, value streams, and flow. What I didn’t expect was how much the environment, the people, and the open space to think differently would shift my entire perspective on how work works.

As someone who’s spent the last 4 years advocating for Value Stream Management (VSM) and building systems that improve visibility and flow, I came into the retreat hoping to sharpen those tools. I left with refined perspectives and a renewed appreciation for the power of stepping away from execution to examine the system itself.

Flow Before Framework

On Day 1, we didn’t jump straight into diagrams or frameworks. Instead, we challenged ourselves to define what flow really means, individually and collectively. Some participants reached for physics and nature metaphors; others spoke about momentum, energy, or alignment.

And that was the point.

We explored flow not just as a metric but as a state of system performance and psychological readiness, one sometimes blocked by misalignment between intention and execution.

We examined constraints, those visible and invisible forces that slow work down. We also examined interpersonal and systemic friction as a root cause of waste and a signal for improvement.

The Power of Shared Experience

Day 2 brought stories. Coaches, consultants, and enterprise leaders shared what it’s like to bring flow practices into environments shaped by legacy processes, functional silos, and outdated metrics.

We didn’t just talk about practices. We compared scars. We discussed what happens when flow improvements stall, how leadership inertia manifests, and why psychological safety is essential to sustain improvement.

The value wasn’t in finding a single answer but in hearing how others had wrestled with the same questions from different perspectives. We found resonance in our challenges and, more importantly, in our commitment to change.

Mapping the System: Day 3 and the Five Maps

It wasn’t until Day 3 that we thoroughly walked through the Five Flow Engineering Maps. By then, we had laid the foundation through shared language and intent. The maps weren’t theoretical. They became immediate tools for diagnosing where our systems break down.

Here’s how we practiced:

  • Outcome Mapping helped us clarify what improvement meant and what we were trying to change in the system.
  • Current State Mapping exposed how work flows through the system, where it waits, and why it doesn’t arrive where or when we expect it.
  • Dependency Mapping surfaced the invisible contracts between teams, the blockers that live upstream and downstream of us.
  • Constraint Mapping allowed us to dig deeper into the patterns, policies, and structures that prevent meaningful flow.
  • Flow Roadmapping helped us prioritize where to start, what to address next, and how to keep system improvement from becoming another unmeasured initiative.

We didn’t just learn to see the system. We refined our skills at improving it by applying real-world case examples.

An Environment That Made Learning Flow

The villa, tucked away on the Yucatán coast, offered more than scenery. It offered permission to slow down, think, walk away from laptops, and walk into reflection. It gave us the space to surface ideas and hold them up to the breeze as some of our Post-it notes blew away.

That environment became part of the learning. It reminded us that improving flow isn’t just about the process. It’s also about the conditions for thinking, collaborating, and creating clarity.

Final Reflections

This retreat wasn’t about doing more work. It focused on collaboration from different perspectives and experiences, understanding how work flows through our systems, and finding ways to improve it that are sustainable, practical, and measurable.

It reaffirmed something I’ve long believed:

When we fix broken or inefficient systems, we unlock the full potential of our people, our products, and our performance.

I left with more than frameworks. I left with conversations I’ll be thinking about for months, new ways to approach problems I thought I understood, and the clarity that comes only when you step outside the system to study it fully.

I’m grateful for the experience and energized for what’s next.

References

  1. Pereira, S. & Davis, A. (2024). Flow Engineering: From Value Stream Mapping to Effective Action. IT Revolution Press.

Filed Under: Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Why Cutting Agile Leadership Hurts Teams More Than It Saves

March 21, 2025 by philc

8 min read

The Cost-Driven Decision That Many Companies Regret

Many organizations today are eliminating the Scrum Master or equivalent Agile leadership role, not to rebrand it but to cut costs. Instead of keeping Agile leadership as a dedicated role, they distribute its responsibilities across existing team members:

  • Engineering Managers take on Agile execution and delivery oversight.
  • Product Managers absorb backlog management, facilitation, and team coordination.
  • Team members self-manage Agile ceremonies, tracking, and reporting.

At first glance, this is a logical cost-saving move. Mature teams should be able to self-organize. But the reality is far more complicated.

In our company, we’ve seen firsthand that keeping Agile leadership as a distinct role pays off significantly more than the salary it costs.

Why We Didn’t Eliminate This Role

Like many organizations, we’ve gone through multiple Agile transformations:

  • Waterfall to WaterScrumFall – Agile sprints, but still project-driven release cycles
  • Scrum to Kanban & Flow – Shifting toward continuous delivery and flow efficiency
  • Scrum Master to Agile Leader to Agile Delivery Manager – Evolving the role to encompass Flow Metrics, Value Stream Management (VSM), Flow Engineering, and continuous optimization

Rather than eliminate the role, we adapted it to better match how our teams and technology operate today.

We’ve moved away from Scrum and now use a Kanban flow with a few Scrum ceremonies mixed in. As a result, we renamed the role “Agile Leader,” since neither “Agile Coach” nor “Scrum Master” fit the way we work or the responsibilities of the role.

Meanwhile, our parent company, which structured product teams differently, removed the Scrum Master and QA roles, pushing those responsibilities onto Product and Engineering Managers. This team design isn’t inherently wrong, but it does fundamentally change team dynamics and, in our experience, weakens long-term effectiveness.

Our support for this role deepened when we began adopting Value Stream Management in 2020 and 2021. As we learned more about optimizing the flow of work across the system, aligning delivery to OKRs and business outcomes, and using Flow Metrics to identify bottlenecks, we made a key decision: rather than hire a value stream architect or a separate value delivery lead, we assigned that accountability to the Agile Leader. That move became a turning point. The role now included facilitation, value stream management, flow engineering, and cross-system delivery health. This expansion of responsibility led us to retitle the role as Agile Delivery Manager.

The Hidden Cost of Eliminating Agile Leadership

Many companies assume Agile execution can take care of itself. But what happens?

  • Engineering Managers are already stretched managing technical leadership, hiring, mentoring, and architecture. Adding Agile execution oversight creates competing priorities.
  • Product Managers are tasked with strategy, roadmap, and customer insight. When they absorb Agile execution, their ability to drive innovation and product-market fit suffers.
  • Teams default to feature-first work without someone to balance priorities across features, tech debt, security, and defects.
  • The erosion of Agile leadership often leads to a breakdown in psychological safety, team culture, and continuous improvement. Agile leaders aren’t just facilitators but team enablers who cultivate trust, alignment, and growth.

The impact of cutting the dedicated Agile leader role from teams isn’t a theoretical concern. We’ve seen organizations eliminate the role and reinstate it later after delivery slowed, burnout spiked, and alignment broke down.

What Happens When Agile Leadership Is Removed?

When Agile leadership is absorbed rather than owned, teams face:

1. Increased Cognitive Load for Engineering & Product Managers

  • Engineering Managers are expected to facilitate Agile ceremonies, track team health, and optimize delivery on top of leading architecture and engineering excellence.
  • Product Managers now manage the backlog, facilitate delivery, and maintain customer alignment all at once.

2. Reduced Flow Efficiency & Team Alignment

  • Work is optimized for speed over value, with more features and fewer strategic investments in quality, sustainability, or security.
  • No one is clearly accountable for balancing work types across the system.

3. Breakdown in Agile Practices, Psychological Safety & Team Culture

  • Retrospectives lose impact without consistent facilitation.
  • Process improvements stall without clear ownership.
  • Team culture and psychological safety erode, affecting engagement, retention, and long-term execution health.

The Agile Delivery Manager: More Than a Facilitator

As our practices evolved, the role’s title and responsibilities changed as well. In 2025, we’re updating it from Agile Leader to Agile Delivery Manager. The Agile Delivery Manager (ADM) is more than just a renamed Scrum Master; it’s an evolved form of Agile leadership designed to ensure:

  • Agile leadership is a focused role, not something to divide among the team  
  • Flow Metrics and Value Stream Management (VSM) help improve overall system delivery  
  • Teams prioritize both feature development and maintaining system health, technical debt, security, and fixing defects  
  • Psychological safety, collaboration, and a strong culture are actively maintained

Unlike Product Managers (incentivized to deliver features) or Engineering Managers (focused on technical excellence and delivery), the ADM has no stake in any single type of work. This neutrality is essential. They provide a holistic, unbiased lens on the system, ensuring balanced Flow Distribution and healthy delivery over time. Without this role, teams prioritize visible work and short-term wins, neglecting foundational needs.
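
To make “balanced Flow Distribution” concrete, here is a minimal Python sketch of the kind of check an ADM might run. The work items and target percentages are invented for illustration; real targets come from business strategy, not from this post.

    from collections import Counter

    # Hypothetical completed work items for one period, tagged by flow item type.
    completed = ["feature", "feature", "feature", "defect",
                 "feature", "debt", "feature", "risk"]

    # Illustrative targets only; set real targets with the business.
    targets = {"feature": 0.50, "defect": 0.20, "debt": 0.20, "risk": 0.10}

    counts = Counter(completed)
    total = len(completed)

    for work_type, target in targets.items():
        actual = counts.get(work_type, 0) / total
        flag = "  <- off balance" if abs(actual - target) > 0.10 else ""
        print(f"{work_type:8} target {target:.0%}  actual {actual:.0%}{flag}")

A flagged line is a prompt for a team conversation about where the work is skewing, not a scorecard for individuals.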

What the Experts Say About Scrum Masters and Why It Still Matters

Some well-known Agile voices have described the Scrum Master as a servant-leader, facilitator, and invisible guide:

“Great Scrum Masters don’t manage the team; they enable the team to manage themselves.” – Gunther Verheyen, author of Scrum – A Pocket Guide

“A good Scrum Master is invisible. A great Scrum Master makes the team feel like they did it themselves.” – Geoff Watts, Agile coach and author

“The role of the Scrum Master is not to ensure Scrum is implemented correctly. It’s to ensure that the team continuously improves and delivers value.” – Scrum.org Blog

“Without a dedicated Scrum Master, teams often fall back into old habits, status reporting, command-and-control, and short-term delivery over long-term health.” – Agile coach insight, echoed across retrospectives and forums

These quotes reflect the foundational role the Scrum Master plays in enabling self-managing teams, continuous improvement, and long-term value delivery.

However, the role must evolve as the team matures. When teams move beyond needing constant facilitation, the Agile leader doesn’t become unnecessary; they become more strategic. They step into a broader role: optimizing flow, supporting cross-functional alignment, stewarding system health, and driving outcome-based delivery.

Rather than disappearing, the Agile leader becomes even more critical, not as a passive servant but as a system-level enabler of delivery efficiency and value.

Lessons from Inside: Comparing Team Models

I’m fortunate to work in an organization that supports both models: teams with dedicated Agile Delivery Managers and teams where those responsibilities are assigned to Engineering Managers. This side-by-side comparison has been revealing. Engineering Managers in teams without an ADM often struggle to juggle architectural leadership, people management, Agile ceremonies, psychological safety coaching, and flow metrics. The burden is real, and it dilutes their impact across all fronts. What gets lost is not just ceremony facilitation but sustained attention to team health, value delivery, and process evolution. Without a clear guide focused on system optimization, these teams tend to operate reactively.

That said, I also recognize that some long-lived, high-performing teams have matured to the point where they can self-manage without formal Agile leadership. These teams have developed strong cultures, embedded trust, and deep internal accountability. In those environments, the absence of a dedicated ADM may not be felt day-to-day.

However, this raises an important question: Who is responsible for reporting on delivery health, aligning with outcomes, and guiding continuous optimization across the system? That’s not a critique; it’s just something worth considering.

Different Models, Different Choices

To be clear, I’m not saying one model is right and the other is wrong. I’m sharing what I’ve seen work and where things fall apart.

Different organizations, maturity levels, and team cultures will demand different approaches. But understanding the trade-offs is key. Eliminating Agile leadership may save salary dollars, but it can cost far more in lost alignment, missed improvement opportunities, and team degradation over time.

Key Responsibilities of the Agile Delivery Manager

Agile Leadership & Flow-Based Delivery

  • Facilitate planning, stand-ups, retrospectives, and production reviews
  • Align teams around roles, responsibilities, and work across the value stream
  • Champion flow efficiency by removing bottlenecks and managing work intake
  • Foster psychological safety, trust, and continuous learning

Value Stream Management, Flow Engineering, and Flow Metrics Optimization

  • Lead monthly Flow Metrics reviews to help teams surface and resolve inefficiencies
  • Track Flow Time, Efficiency, Load, Velocity, and Distribution (see the sketch after this list)
  • Ensure investment in tech debt, security, and sustainability, not just features

Cross-Team Collaboration & Dependency Management

  • Align Product, Engineering, and Agile leadership
  • Coordinate across teams to manage dependencies and reduce delivery friction
  • Partner with Platform and Production Engineering teams for smoother execution
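
As a rough illustration of what a monthly Flow Metrics review might start from, here is a minimal Python sketch. The work items, dates, and field layout are invented for this post; Flow Load is omitted (it’s a work-in-progress snapshot), and Flow Distribution is sketched earlier.

    from datetime import date

    # Hypothetical items finished this period: (started, finished, active_days),
    # where active_days is time spent actively working rather than waiting.
    items = [
        (date(2025, 4, 1), date(2025, 4, 8), 3),
        (date(2025, 4, 3), date(2025, 4, 20), 5),
        (date(2025, 4, 10), date(2025, 4, 15), 4),
    ]

    # Flow Time: elapsed days from start to finish for each item.
    flow_times = [(done - start).days for start, done, _ in items]

    # Flow Efficiency: the share of Flow Time spent actively working.
    efficiencies = [active / ft for (_, _, active), ft in zip(items, flow_times)]

    # Flow Velocity: number of items completed in the period.
    velocity = len(items)

    print(f"Avg Flow Time:       {sum(flow_times) / len(flow_times):.1f} days")
    print(f"Avg Flow Efficiency: {sum(efficiencies) / len(efficiencies):.0%}")
    print(f"Flow Velocity:       {velocity} items this period")

In practice, these numbers come from your work-tracking tool. The point is that each metric is a simple aggregate the team can inspect and question together, not a target handed down from above.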

The Unicorn Problem: Why Overloading Other Roles Fails

Some argue that Product and Engineering Managers can take on these additional responsibilities, but at what cost?

The industry already struggles to fill these roles with strong candidates. When you ask one person to manage delivery flow, facilitate team dynamics, coach culture, drive Agile execution, and lead strategy, you create what I call the “unicorn problem.”

  • T-Shaped Leaders = Deep expertise in one area + a broad understanding of others
  • V-Shaped Leaders = Deep expertise in everything (engineering, Agile, customer insight, facilitation, coaching, metrics, and more)

Unicorns exist, but they are rare and seldom last. Overloading these roles doesn’t set anyone up for success.

Should You Drop the Dedicated Scrum Master or Agile Leader Role?

Most organizations still have a Scrum Master or equivalent Agile role, but some are experimenting with eliminating it in favor of shared responsibilities.

While this can work in some instances, our experience shows that a dedicated Agile leadership role improves:

  • Delivery flow efficiency
  • Business alignment
  • Sustainable team execution
  • Psychological safety and culture

So before you eliminate the role, ask: Who on your team is incentivized to prioritize delivery balance across features, tech debt, security, and defects? If no one owns that responsibility, it’s likely no one is doing it well.

Again, I’m not prescribing a one-size-fits-all answer. I’m sharing what I’ve seen in practice: teams that struggled without this role, high-performing teams that outgrew it, and the evolution of the ADM as a critical driver of system-wide value delivery. The key is clarity of purpose and accountability, no matter the model.

“Agile leaders don’t just guide their teams. They protect and improve the entire delivery system. They play a key role in ensuring its integrity and success. They are the guardians of the delivery system.” – Phil Clark

What’s your experience?

  • Has your organization eliminated this role?
  • If so, what impact has it had?
  • Should Agile execution be absorbed by Engineering and Product Managers?

Let’s keep the conversation going.

Related Posts

  • From Good to Great: Shifting to Outcomes in 2025, January 2025.
  • Beyond Facilitation: The Agile Leader’s Place in Cross-Functional Team Dynamics, February 2024.
  • Agile Software Delivery: Unlocking Your Team’s Full Potential. It’s not the Product Owner, December 2022.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management
