Rethink Your Understanding

Transforming Software Delivery


Leadership

We Have Metrics. Now What?

May 11, 2025 by philc

7 min read

A Guide for Legacy-Minded Leaders on Using Metrics to Drive the Right Behavior

From Outputs to Outcomes

A senior executive recently asked a VP of Engineering and the Head of Architecture for industry benchmarks on Flow Metrics. At first, this seemed like a positive step, shifting the focus from individual output to team-level outcomes. However, the purpose of the request raised concerns. These benchmarks were intended to evaluate engineering managers’ performance for annual reviews and possibly bonuses.

That’s a problem. Using system-level metrics to judge individual performance is a common mistake. It might look good on paper, but it risks turning system-level signals into personal scorecards, creating the very dysfunction these metrics are meant to reveal and improve. Used this way, metrics lose their value and invite gaming over genuine improvement. This guide is for senior leaders adopting team-level metrics who want to use them effectively. You’ve chosen better metrics; now let’s make sure they work as intended.

To clarify, the executive’s team structure follows the Engineering Manager (EM) model, where EMs are responsible for the performance of the delivery team. In contrast, I support an alternative approach with autonomous teams built around team topologies. These teams include all the roles needed to deliver value, without a manager embedded in the team. These are two common but very different models of team structure and performance evaluation.

This isn’t the first time I’ve seen senior leaders misuse team-level metrics, and it likely won’t be the last. So I asked myself: Now that more leaders have agreed to adopt the right metrics, do they know how to use them responsibly?

I will admit that I was frustrated to learn of this request, but the event inspired me to create a guide for leaders, especially those used to traditional, output-focused models who are new to Flow Metrics and team-level measurement. This article shares my approach to metrics, focusing on curiosity, care, and a learning mindset. It’s not a set of rules. You’ve already chosen team-aligned metrics, and now I’ll explain how we use them to drive improvement while avoiding the pitfalls of judgment or manipulation.

A Note on Industry Benchmarks

As noted at the start of this post, the senior leader requested industry benchmarks, specifically for Flow Metrics. When benchmarks are treated as targets or internal scorecards, they can reduce transparency. Teams might focus on meeting the numbers instead of addressing challenges openly.

Benchmarks are helpful, but only when applied thoughtfully. They’re most effective at the portfolio or organizational level rather than as performance targets for individual teams. Teams differ significantly in architecture, complexity, support workload, and business focus. Comparing an infrastructure-heavy team to a greenfield product team isn’t practical or fair.

Use benchmarks to understand patterns, not to assign grades. Ask instead: “Is this team improving against their baseline? What’s helping or getting in the way?”
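
To make “improving against their baseline” concrete, here’s a minimal sketch in Python with made-up numbers. It compares a team’s recent Flow Times to its own earlier period rather than to an industry figure; the data and window sizes are illustrative assumptions:

```python
from statistics import median

# Hypothetical Flow Times (days) for completed items, in chronological order.
flow_times_days = [14, 12, 15, 11, 13, 10, 9, 11, 8, 9, 7, 8]

baseline = median(flow_times_days[:6])   # earlier period: the team's own baseline
current = median(flow_times_days[6:])    # recent period

change = (current - baseline) / baseline * 100
print(f"Baseline median Flow Time: {baseline} days")
print(f"Current median Flow Time:  {current} days ({change:+.0f}%)")
# Roughly -32% here: the team is improving against its own baseline,
# regardless of where an industry benchmark would place it.
```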

How to Use Team Metrics Without Breaking Trust or the System

1. Start by inviting teams into the process

  • Don’t tell them, “Flow Efficiency must go up 10%.”
  • Ask instead: “Here’s what the data shows. What’s behind this? What could we try?”

Why: Positive intent. Teams already want to improve. They’ll take ownership if you bring them into the process and give them time and space to act. Top-down mandates might push short-term results, but they usually kill long-term improvement.

2. Understand inputs vs. outputs

  • Output metrics (like Flow Time, PR throughput, or change failure rate) are results. You don’t control them directly.
  • Input metrics (like review turnaround time or number of unplanned interruptions) reflect behaviors teams can change.

Why: If you set targets on outputs, teams won’t know what to do. That’s when you get gaming or frustration. Input metrics give teams something they can improve. That’s how you get real system-level change.

I’ve been saying this for a while, and I like how Abi Noda and the DX team explain it: input vs. output metrics. It’s the same idea as leading vs. lagging indicators. Focus on what teams can influence, not just what you want to see improve.
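
To make the input/output distinction concrete, here’s a small illustrative sketch. The data shape is hypothetical; in practice these timestamps would come from your issue tracker or VSM platform:

```python
from datetime import datetime

# Hypothetical work items with event timestamps (shape is illustrative).
items = [
    {"started": "2025-04-01", "review_requested": "2025-04-04",
     "review_done": "2025-04-08", "finished": "2025-04-10"},
    {"started": "2025-04-02", "review_requested": "2025-04-03",
     "review_done": "2025-04-04", "finished": "2025-04-07"},
]

def days(a, b):
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

# Output metric: Flow Time (start to finish). A result; not directly controllable.
flow_times = [days(i["started"], i["finished"]) for i in items]

# Input metric: review turnaround. A behavior the team can change directly.
turnarounds = [days(i["review_requested"], i["review_done"]) for i in items]

print(f"Avg Flow Time (output):        {sum(flow_times) / len(flow_times):.1f} days")
print(f"Avg review turnaround (input): {sum(turnarounds) / len(turnarounds):.1f} days")
```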

3. Don’t turn metrics into targets

When a measure becomes a target, it stops being useful (a restatement of Goodhart’s Law).

  • Don’t turn system health metrics into KPIs.
  • If people feel judged by a number, they’ll focus on making the number look good instead of fixing the system.

Why: You’ll get shallow progress, not real change. And you won’t know the difference because the data will look better. The cost? Lost trust, lower morale, and bad decisions.

4. Always add context

  • Depending on the situation, a 10-day Flow Time might be great or terrible.
  • Ask about the team’s product, the architecture, the kind of work they do, and how much unplanned work they handle.

Why: Numbers without context are misleading. They don’t tell the story. If you act on them without understanding what’s behind them, you’ll create the wrong incentives and fix the wrong things.

5. Set targets the right way

  • Not every metric needs a goal.
  • Some should trend up; others should stay stable.
  • Don’t use blanket rules like “improve everything by 10%.”

Why: Metrics behave differently. Some take months to move. Others can be gamed easily. Think about what makes sense for that metric in that context. Real improvement takes time; chasing the wrong number can do more harm than good.

6. Tie metrics back to outcomes and the business

  • Don’t just say, “Flow Efficiency improved.” Ask, what changed?
    • Did we deliver faster?
    • Did we reduce the cost of delay?
    • Did we create customer value?

If you’ve read my other posts, I recommend tying every epic and initiative to an anticipated outcome. That mindset also applies to metrics. Don’t just look at the number. Ask what value it represents.

Also, it’s critical that teams use metrics to identify their bottlenecks. That’s the key. Real flow improvement comes from fixing the most significant constraint. If you’re improving something upstream or downstream of the bottleneck, you’re not improving flow. You’re just making things look better in one part of the system. It’s localized and often a wasted effort.

Why: If the goal is better business outcomes, you must connect what the team does with how it moves the needle. Metrics are just the starting point for that conversation.
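
For teams that want to find their bottleneck from data, a minimal sketch looks something like this. The stage names and numbers are invented; the point is that the stage where work waits longest is the constraint, and that’s where improvement pays off:

```python
# Hypothetical days spent by completed items in each workflow stage.
stage_days = {
    "Ready":       [2, 3, 1, 4],
    "In Progress": [3, 2, 4, 3],
    "Code Review": [6, 8, 7, 9],   # items pile up waiting for review
    "Test":        [1, 2, 1, 1],
    "Deploy":      [1, 1, 2, 1],
}

averages = {stage: sum(d) / len(d) for stage, d in stage_days.items()}
bottleneck = max(averages, key=averages.get)

for stage, avg in averages.items():
    print(f"{stage:<12} {avg:.1f} days")
print(f"\nBottleneck: {bottleneck}. Improving any other stage first "
      "won't shorten end-to-end Flow Time.")
```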

7. Don’t track too many things

  • Stick to 3-5 input metrics at a time.
  • Make these part of retrospectives, not just leadership dashboards.

Why: Focus drives improvement. If everything is a priority, nothing is. Too many metrics dilute the team’s energy. Let them pick the right ones and go deep.

8. Build a feedback loop that works

  • Metrics are most useful when teams review them regularly.
  • Make time to reflect and adapt.

We’re still experimenting with what cadence works best. Right now, monthly retrospectives are the minimum. That gives teams short feedback loops to adjust their improvement efforts. A quarterly check-in is still helpful for zooming out. Both are valuable. We’re testing these cycles, but they give teams enough time to try, reflect, and adapt.

Why: Improvement requires learning. Dashboards don’t improve teams. Feedback does. Create a rhythm where teams can test ideas, measure progress, and shift direction.

A Word of Caution About Using Metrics for Performance Reviews

Some leaders ask, “Can I use Flow Metrics to evaluate my engineering managers?” You can, but it’s risky.

Flow Metrics tell you how the system is performing. They’re not designed to evaluate individuals. If you tie them to bonuses or promotions, you’ll likely get:

  • Teams gaming the data
  • Managers focusing on optics, not problems
  • Reduced trust and openness

Why: When you make metrics part of a performance review, people stop using them for improvement. They stop learning. They play it safe. That hurts the team and the system.

Here’s what you can do instead:

In manager-led models, Engineering Managers are typically held accountable for team delivery. In cross-functional models, Agile Delivery Managers help guide improvement but don’t directly own delivery outcomes. In either case, someone helps the team improve.

That role should be evaluated, but not based on the raw numbers alone. Instead, assess how they supported improvement.

Thoughts on assessing “Guiding Team Improvement”:

Bottleneck Identification

  • Did they help surface and clarify constraints?
  • Are bottlenecks discussed and addressed?

Team-Led Problem Solving

  • Did they enable experiments and reflection, not dictate fixes?

Use of Metrics for Insight, Not Pressure

  • Did they foster learning and transparency?

Facilitation of Improvement Over Time

  • Do the trends show intentional learning?

Cross-Team Alignment and Issue Escalation

  • Are they surfacing systemic issues beyond their team?

Focus on influence, not control. Assess those accountable for team performance improvement based on how they influence system improvements and support their teams.

  • Use metrics to guide coaching conversations, not to judge.
  • Evaluate managers based on how they improve the system and support their teams.
  • Reward experimentation, transparency, and alignment to business value.

Performance is bigger than one number. Metrics help tell the story, but they aren’t the story.

Sidebar: What if Gamification Still Improves the Metric?

I’ve heard some folks say, “I’m okay with gamification. If the number gets better, the team’s getting better.” That logic might work in the short term but breaks down over time. Here’s why:

  1. It often hides real issues.
  2. It focuses on optics instead of outcomes.
  3. It breaks feedback loops that drive learning.
  4. It leads to local, not systemic, improvement.

So, while gamification might improve the score, it rarely improves the system, and seldom as efficiently or sustainably.

If the goal is long-term performance, trust the process. Let teams learn from the data. Don’t let the number become the mission.

Metrics are just tools. If you treat them like a scoreboard, you’ll create fear. If you treat them like a flashlight, they’ll help you and your teams see what’s happening.

Don’t use metrics to judge individuals. Use them to guide conversations, surface problems, and support improvement. That’s how you build trust and better systems.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

A Self-Guided Performance Assessment for Agile Delivery Teams

May 3, 2025 by philc

12 min read

This all started with a conversation and a question: “We do performance reviews for individuals, but what about teams?” If we care about how individuals perform, shouldn’t we also care about how teams perform together?

Why do we even work in teams?

It’s a strategic decision. In modern software delivery, teams are the core drivers of value. A strong team can achieve results far greater than what individuals can accomplish alone. How well we think and work together as a team (collective intelligence) is more impactful than the individual performance of its members. That’s why improving team effectiveness is so important.

But what does team effectiveness enable?

  • Execution: High-performing teams work faster and better meet customer needs. They focus on the right priorities, adjust quickly, and recover faster when problems arise.
  • Engagement and Retention: People stay in workplaces where they feel their contributions matter, where they’re supported, and where they feel safe to share ideas. Strong teams build this kind of environment.
  • Sustainable Performance: Burnout occurs when individuals take on too much on their own. Strong teams share the workload, support one another, and collaborate to solve problems.

Many organizations still evaluate individuals through standalone performance assessments, overlooking their performance within the team, their contributions, and the overall dynamics and health of the team.

So, let’s ask a better question: How well does your team work together?

  • What strengths and skills is the team using?
  • Which areas need more development or clarification?
  • How often does your team take time to review its performance together?
  • Do you have a system in place for gathering feedback and implementing ongoing improvements?

Just like individuals, even high-performing teams experience slumps or periods of lower performance. Acknowledging this is the first step toward helping the team return to excellence.

This article provides a self-assessment tool to help teams evaluate their current working practices at a specific point in time. The goal isn’t to place blame or measure productivity but to spark open conversations and create clarity that leads to improvement. When teams get feedback on performance and collaborate effectively, everything improves: delivery speed, developer satisfaction, and overall business impact.

A Reflection More Than a Framework

This isn’t a manager’s tool or a leadership scorecard. It’s a guide for teams looking to improve how they collaborate with purpose. It’s for delivery teams that value their habits just as much as their results.

Use it as a retro exercise. A quarterly reset. A mirror.

Why Team Reflection Matters

We already measure delivery performance. DORA. Flow. Developer Experience.
But those metrics don’t always answer:

  • Are we doing what we said mattered, like observability and test coverage?
  • Are we working as a team or as individuals executing in parallel?
  • Do we hold each other accountable for delivering with integrity?

This is the gap: how teams work together. This guide helps fill it, not to replace metrics, but to deepen the story they tell.

What This Is (And Isn’t)

You might ask: “Don’t SAFe, SPACE, DORA, or Flow Metrics already do this?”
Yes and no. Those frameworks are valuable. But they answer different questions:

  • DORA & Flow: How fast and stable is our delivery?
  • DX Core 4 & SPACE: How do developers feel about their work environment?
  • Maturity Models: How fully have we adopted Agile practices?
  • SAFe Measure and Grow: For organizations implementing SAFe, it evaluates enterprise agility in dimensions such as team agility, product delivery, and lean leadership.

What they don’t always show is:

  • Are we skipping discipline under pressure?
  • Do we collaborate across roles or operate in silos?
  • Are we shipping through red builds and hoping for the best?

But the question stuck with me:
If we hold individuals accountable for how they show up, shouldn’t we do the same for teams?

What follows is a framework and a conversation starter, not a mandate. It’s just something to consider because, in many organizations, teams are where the real impact (or dysfunction) lives.

Suggested Team Reflection Dimensions

You don’t need to use all ten categories. Start with the ones that matter most to your team, or define your own. This section is designed to help teams reflect on how they work together, not just what they deliver.

But before diving into individual dimensions, start with this simple but powerful check-in.

Would We Consider Ourselves Underperforming, Performing, or High-Performing?

This question encourages self-awareness without any external judgment. The team should decide together: no scorecards, no leadership evaluations, just a shared reflection on your experience as a delivery team.

From there, explore:

  • What makes us feel that way?
    What behaviors, habits, or examples support our self-assessment?
  • What should we keep doing?
    What’s already working well that we want to protect or double down on?
  • What should we stop doing?
    What’s causing friction, waste, or misalignment?
  • What should we start doing?
    What’s missing that could improve how we operate?
  • Do we have the skills and knowledge needed to meet our work demands?

This discussion often surfaces more actionable insight than metrics alone. It grounds the assessment in the team’s shared experience and sets the tone for improvement, not judgment.

A Flexible Self-Evaluation Scorecard

While this isn’t designed as a top-down performance tool, teams can use it as a self-evaluation scorecard if they choose. The reflection tables that follow can help teams:

  • Identify where they align today: underperforming, performing, or high-performing.
  • Recognize the dimensions where they excel and where they have room to improve.
  • Prioritize the changes that will have the greatest impact on how they deliver.

No two teams will see the same patterns, and that’s the point. Use the guidance below not as a measurement of worth but as a compass to help your team navigate toward better outcomes together.

10-Dimension Agile Team Performance Assessment Framework

These dimensions serve as valuable tools for self-assessments, retrospectives, or leadership reviews, offering a framework to evaluate not just what teams deliver, but how effectively they perform.

  1. Execution & Ownership: Do we plan realistically, adapt when needed, and take shared responsibility for outcomes?
  2. Collaboration & Communication: Do we collaborate openly, communicate effectively, and stay aligned across roles?
  3. Flow & Efficiency: Is our work moving steadily through the system with minimal delays or waste?
  4. Code Quality & Engineering Practices: Do we apply consistent technical practices that support high-quality, sustainable code?
  5. Operational Readiness & Observability: Are we ready to monitor, support, and improve the solutions we deliver?
  6. Customer & Outcome Focus: Do we understand who we’re building for and how our work delivers real-world value?
  7. Role Clarity & Decision Making: Are roles well understood, and do we share decisions appropriately across the team?
  8. Capabilities & Growth: Do we have the skills to succeed, and are we growing individually and as a team?
  9. Data-Driven Improvement: Do we use metrics, retrospectives, and feedback to improve how we work?
  10. Business-Technical Integration: Do we balance delivery of business and customer value with investment in technical health?

These dimensions help teams focus not just on what they’re delivering but also on how their work contributes to long-term success.

Reflection Table

This sample table is a great way to start conversations. It works well for retrospectives, quarterly check-ins, or when something feels off. Each category includes a key question and signs that may indicate your team is facing challenges in that area. These can be used as a team survey as well.

Execution & Ownership
Reflection Prompts: Do we plan realistically and follow through on what we commit to? Are we updating estimates and plans as new information emerges? Do we raise blockers or risks early? Are we collectively responsible for outcomes?
Signs of Struggle: Missed or overly optimistic goals, reactive work, unclear priorities or progress, estimates are outdated or disconnected from reality, team blames others or avoids accountability when things go wrong.

Collaboration & Communication
Reflection Prompts: Do we communicate openly, show up for team events, and work well across roles? How do we share knowledge and maintain alignment?
Signs of Struggle: Silos, missed handoffs, unclear ownership, frequent miscommunication.

Flow & Efficiency
Reflection Prompts: How efficiently does work move through our system? Are we managing context switching, controlling work in progress, and minimizing delays or rework?
Signs of Struggle: Ignored bottlenecks, context switching, stale or stuck work.

Code Quality & Engineering Practices
Reflection Prompts: Do we value quality in every commit? Are testing, automation, and clean code part of our culture? Do we apply consistent practices to ensure high-quality, maintainable code?
Signs of Struggle: Bugs, manual processes, high rework, tech debt increasing.

Operational Readiness & Observability
Reflection Prompts: Can we detect, troubleshoot, and respond to issues quickly and confidently?
Signs of Struggle: No monitoring, poor alerting, users report issues before we know.

Customer & Outcome Focus
Reflection Prompts: Do we understand the “why” behind our work (the anticipated outcome)? Do we measure whether we’re delivering impact and not just features?
Signs of Struggle: Misaligned features, lack of outcome tracking, limited feedback loops.

Role Clarity & Decision Making
Reflection Prompts: Are team roles clear to everyone on the team? Do we share decision-making across product, tech, and delivery?
Signs of Struggle: Conflicting priorities, top-down decision dominance, slow resolution.

Capabilities & Growth
Reflection Prompts: Do we have the right skills to succeed and time to improve them? Do we have the capabilities required to deliver work?
Signs of Struggle: Skill gaps, training needs ignored, dependence on specialists or other teams.

Data-Driven Improvement
Reflection Prompts: Do we use metrics, retrospectives, and feedback to improve how we work?
Signs of Struggle: Metrics ignored, retros lack follow-through, repetitive problems.

Accountability & Ownership
Reflection Prompts: Can we be counted on? Do we take shared responsibility for our delivery and raise risks early?
Signs of Struggle: Missed deadlines, hidden blockers, avoidance of tough conversations.

Business-Technical Integration
Reflection Prompts: Are we balancing product delivery with long-term technical health and business needs?
Signs of Struggle: Short-term thinking, ignored tech debt, disconnected roadmap and architecture.

How this appears in table format:

| Dimension | Reflection Prompts | Signs of Struggle |
| --- | --- | --- |
| 1. Execution & Ownership | Do we plan realistically and follow through on what we commit to? Are we updating estimates and plans as new information emerges? Do we raise blockers or risks early? Are we collectively responsible for outcomes? | Missed or overly optimistic goals, reactive work, unclear priorities or progress, estimates outdated or disconnected from reality, team blames others or avoids accountability when things go wrong. |
| 2. Collaboration & Communication | Do we communicate openly, show up for team events, and work well across roles? How do we share knowledge and maintain alignment? | Silos, missed handoffs, unclear ownership, frequent miscommunication. |
| 3. Flow & Efficiency | How efficiently does work move through our system? Are we managing context switching, controlling work in progress, and minimizing delays or rework? | Ignored bottlenecks, context switching, stale or stuck work. |
| 4. Code Quality & Engineering Practices | Do we value quality in every commit? Are testing, automation, and clean code part of our culture? Do we apply consistent practices to ensure high-quality, maintainable code? | Bugs, manual processes, high rework, tech debt increasing. |
| 5. Operational Readiness & Observability | Can we detect, troubleshoot, and respond to issues quickly and confidently? | No monitoring, poor alerting, users report issues before we know. |
| 6. Customer & Outcome Focus | Do we understand the “why” behind our work (the anticipated outcome)? Do we measure whether we’re delivering impact and not just features? | Misaligned features, lack of outcome tracking, limited feedback loops. |
| 7. Role Clarity & Decision Making | Are roles clear? Do we share decision-making across product, tech, and delivery? | Conflicting priorities, top-down decision dominance, slow resolution. |
| 8. Capabilities & Growth | Do we have the right skills to succeed and time to improve them? Do we have the capabilities required to deliver work? | Skill gaps, training needs ignored, dependence on specialists or other teams. |
| 9. Data-Driven Improvement | Do we use metrics, retrospectives, and feedback to improve how we work? | Metrics ignored, retros lack follow-through, repetitive problems. |
| 10. Business-Technical Integration | Are we balancing product delivery with long-term technical health and business needs? | Short-term thinking, ignored tech debt, disconnected roadmap and architecture. |

Detailed Assessment Reference

For teams looking for assessment levels, the next section breaks down each reflection category. It explains what “Not Meeting Expectations,” “Meeting Expectations,” and “Exceeding Expectations” look like in practice.

Execution & Ownership
Do we plan realistically, adapt when needed, and take shared responsibility for outcomes?

  • Not Meeting Expectations:
    No planning rhythm; commitments are missed; estimates are rarely updated; blockers are hidden.
  • Meeting Expectations:
    Team plans regularly, meets most commitments, revises estimates as needed, and raises blockers transparently.
  • Exceeding Expectations:
    Plans adapt with agility; estimates are realistic and actively managed; the team owns outcomes and proactively addresses risks.

Collaboration & Communication
Do we collaborate openly, communicate effectively, and stay aligned across roles?

  • Not Meeting Expectations: Works in silos; communication is inconsistent or unclear; knowledge isn’t shared; team members don’t regularly attend meetings or conversations.
  • Meeting Expectations: Team collaborates effectively and communicates openly across roles.
  • Exceeding Expectations: Team creates shared clarity, collaborates regularly, and actively drives alignment across all functions.

Flow & Efficiency
Is our work moving steadily through the system with minimal delays or waste?

  • Not Meeting Expectations: Work is consistently blocked or stuck; high WIP and frequent context switching slow delivery.
  • Meeting Expectations: Team manages WIP, removes blockers, and maintains steady delivery flow.
  • Exceeding Expectations: Team actively optimizes flow end-to-end; bottlenecks are identified and resolved.

Code Quality & Engineering Practices
Do we apply consistent technical practices that support high-quality, sustainable code?

  • Not Meeting Expectations: Defects are frequent; automation, testing, and refactoring are lacking.
  • Meeting Expectations: Defects are infrequent; code reviews and testing are standard; quality practices are regularly applied.
  • Exceeding Expectations: Quality is a shared team value; clean code, automation, and sustainable practices are embedded.

Operational Readiness & Observability
Are we ready to monitor, support, and improve the solutions we deliver?

  • Not Meeting Expectations: Monitoring is missing or insufficient; issues are discovered by users.
  • Meeting Expectations: Alerts and monitoring are in place; team learns from post-incident reviews.
  • Exceeding Expectations: Observability is proactive; issues are detected early and inform ongoing improvements.

Customer & Outcome Focus
Do we understand who we’re building for and how our work delivers real-world value?

  • Not Meeting Expectations: Work is disconnected from business goals; outcomes are not communicated or measured.
  • Meeting Expectations: Team understands customer or business impact and loosely ties delivery to anticipated outcomes and value.
  • Exceeding Expectations: Business or Customer impact drives planning and iteration; outcomes are tracked and acted upon.

Role Clarity & Decision Making
Are roles well understood, and do we share decisions appropriately across the team?

  • Not Meeting Expectations: Decision-making and prioritization are top-down or unclear; roles overlap or are siloed.
  • Meeting Expectations: Team members understand their roles, prioritize, and make decisions collaboratively.
  • Exceeding Expectations: Teams co-own prioritization and decisions with transparency, clear tradeoffs, and joint accountability.

Capabilities & Growth
Do we have the skills to succeed, and are we growing individually and as a team?

  • Not Meeting Expectations: Skill gaps persist; team lacks growth opportunities or training support.
  • Meeting Expectations: The team has the right skills for current work and seeks help when needed.
  • Exceeding Expectations: Team proactively builds new capabilities, shares knowledge, and adapts to new challenges.

Data-Driven Improvement
Do we use metrics, retrospectives, and feedback to improve how we work?

  • Not Meeting Expectations: Feedback is anecdotal; metrics are not understood, are ignored, or go unused in retrospectives.
  • Meeting Expectations: Team uses metrics and feedback to inform improvements regularly.
  • Exceeding Expectations: Metrics drive learning, experimentation, and meaningful change.

Business-Technical Integration
Do we balance delivery of business and customer value with investment in technical health?

  • Not Meeting Expectations: Technical health is ignored or sidelined in favor of speed and features.
  • Meeting Expectations: Product and engineering collaborate on both business value and technical needs.
  • Exceeding Expectations: Long-term technical health and business alignment are integrated into delivery decisions.

How this appears in table format:

10-Dimension Agile Team Performance Assessment Framework (3-Point Scale)

| Dimension | Not Meeting Expectations | Meeting Expectations | Exceeding Expectations |
| --- | --- | --- | --- |
| 1. Execution & Ownership | No planning rhythm; missed commitments; outdated estimates; blockers hidden. | Regular planning; estimates revised; blockers raised transparently. | Plans adapt with agility; estimates are managed; team owns outcomes and addresses risks proactively. |
| 2. Collaboration & Communication | Siloed work; unclear communication; knowledge hoarded. | Open, cross-role communication; knowledge shared. | Team drives shared clarity and proactive alignment with others. |
| 3. Flow & Efficiency | Work stalls; high WIP; frequent context switching. | Steady flow; WIP managed; blockers removed. | Flow optimized across the system; bottlenecks surfaced and resolved quickly. |
| 4. Code Quality & Engineering | Frequent defects; minimal automation; unmanaged tech debt. | Testing and reviews in place; debt tracked. | Clean, sustainable code is a team norm; quality and automation prioritized. |
| 5. Operational Readiness | Monitoring lacking; users detect issues. | Monitoring and alerting in place; incident reviews occur. | Team detects issues early; observability drives proactive improvement. |
| 6. Customer & Outcome Focus | Little connection to business value or user needs. | Team aware of goals; some outcome alignment. | Delivery prioritized around customer value; outcomes measured and iterated on. |
| 7. Role Clarity & Decision Making | Roles unclear; top-down decisions. | Roles defined; collaborative decision-making. | Shared decision ownership; tradeoffs transparent and understood. |
| 8. Capabilities & Growth | Skill gaps ignored; no focus on development. | Right skills in place; asks for help when needed. | Team proactively grows skills; cross-training and adaptability are norms. |
| 9. Data-Driven Improvement | Metrics ignored; retros repetitive or shallow. | Data and feedback used in team improvement. | Metrics and feedback drive learning and meaningful change. |
| 10. Business-Technical Integration | Technical health neglected; short-term focused. | Business and tech needs discussed and planned. | Business outcomes and technical resilience are co-prioritized in delivery. |

The assessment is meant to start conversations. Use it as a guide, not a strict scoring system, and revisit it as your team grows and changes. High-performing teams regularly reflect as part of their routine, not just occasionally.

How to Use This and Who Should Be Involved

This framework isn’t only a performance review. It’s a reflection tool designed for teams to assess themselves, clarify their goals, and identify areas for growth.

Here’s how to make it work:

1. Run It as a Team

Use this framework during retrospectives, quarterly check-ins, or after a major delivery milestone. Let the team lead the conversation. They’re closest to the work and best equipped to evaluate how things feel.

The goal isn’t to assign grades. It’s to pause, align, and ask: How are we doing?

2. Make It Yours

There’s no need to use all ten dimensions. Start with the ones that resonate most. You can rename them, add new ones, or redefine what “exceeding expectations” looks like in your context.

The more it reflects your team’s values and language, the more powerful the reflection becomes.

3. Use Metrics to Support the Story, Not Replace It

Delivery data like DORA, Flow Metrics, or Developer Experience scores can add perspective. But they should inform, not replace, the conversation. Numbers are helpful, but they don’t capture how it feels to deliver work together. Let data enrich the dialogue, not dictate it.

4. Invite Broader Perspectives

Some teams gather anonymous 360° feedback from stakeholders or adjacent teams, surfacing blind spots and validating internal perceptions.

Agile Coaches or Delivery Leads can also bring an outside-in view, helping the team see patterns over time, connecting the dots across metrics and behaviors, and guiding deeper reflection. Their role isn’t to evaluate but to support growth.

5. Let the Team Decide Where They Stand

As part of the assessment, ask the team:
Would we consider ourselves underperforming, performing, or high-performing? Then explore:

  • What makes us feel that way?
  • What should we keep doing?
  • What should we stop doing?
  • What should we start doing?

These questions give the framework meaning. They turn observation into insight and insight into action.

This Is About Ownership, Not Oversight

This reflection guide and its 10 dimensions can serve as a performance management tool, but I strongly recommend using it as a check-in resource for teams. It’s designed to build trust, encourage honest conversations, and offer a clear snapshot of the team’s current state. When used intentionally, it enhances team cohesion and strengthens overall capability. For leaders, focusing on recurring themes rather than individual scores reveals valuable patterns that can inform coaching efforts rather than impose control. Adopting it is in your hands and your team’s.

Final Thoughts

This all started with a conversation and a question: “We do performance reviews for individuals, but what about teams?” If we care about how individuals perform, shouldn’t we also care about how teams perform together?

High-performing teams don’t happen by accident. They succeed by focusing on both what they deliver and how they deliver it.

High-performing teams don’t just meet deadlines, they adapt, assess themselves, and improve together. This framework provides them with a starting point to make that happen.

I’ll create a Google Form with these dimensions, using a 3-point Likert scale for our teams to fill out.
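
As a rough illustration of how those responses could be summarized, here’s a small Python sketch (the dimension names and ratings are made up). The output is a conversation starter, not a grade:

```python
from collections import Counter

# Hypothetical survey responses: one entry per team member, mapping each
# dimension to a 3-point rating (1 = Not Meeting, 2 = Meeting, 3 = Exceeding).
responses = [
    {"Execution & Ownership": 2, "Flow & Efficiency": 1, "Data-Driven Improvement": 2},
    {"Execution & Ownership": 3, "Flow & Efficiency": 1, "Data-Driven Improvement": 2},
    {"Execution & Ownership": 2, "Flow & Efficiency": 2, "Data-Driven Improvement": 3},
]

for dim in responses[0]:
    ratings = [r[dim] for r in responses]
    spread = Counter(ratings)
    avg = sum(ratings) / len(ratings)
    print(f"{dim:<26} avg {avg:.1f}  spread {dict(sorted(spread.items()))}")

# A low average flags a dimension to discuss; a wide spread flags a
# disagreement worth exploring. Both are prompts, not verdicts.
```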


Related Articles

If you found this helpful, here are a few related articles that explore the thinking behind this framework:

  • From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics
  • Navigating the Digital Product Workflow Metrics Landscape: From DORA to Comprehensive Value Stream Management Platform Solutions

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

How Value Stream Management and Product Operating Models Complement Each Other

April 27, 2025 by philc

7 min read

“The future of software delivery isn’t about process versus structure; it’s about harmonizing both to deliver better, faster, and smarter.”

Next month, I’ve been invited to meet with a senior leader from a large organization, who is also a respected industry figure, to discuss the Product Operating Model. I initially saw it as a good opportunity to prepare and share insights. Instead, it sparked an important realization.

In late 2020, I introduced Value Stream Management (VSM) to our organization, initiating the integration process in 2021. At the time, this marked the beginning of my understanding of VSM and our first attempt to implement it. Since then, we’ve gained deeper insights and valuable lessons, allowing us to refine our approach.

Recently, when asked about Value Stream Management (VSM), I explained that it helps make our Agile, Lean, and DevOps investments visible.
Now, with our VSM 1.5 approach, I highlight that it also makes our investments in Agile, Lean, DevOps, OKRs, and Outcomes more transparent.

Today, we are evolving our Value Stream Management (VSM) practices into what we now call VSM 1.5 (assuming we started at 0.9 or 1.0).

We took a more logical approach to redefining our Value Streams and aligning teams. We’ve also improved how we focus on metrics and hold discussions while requiring the anticipated outcomes of each Initiative or Epic to be documented in Jira. I outlined a strategy for leveraging team-level OKRs to align with broader business outcomes. I’ve also briefly touched on this concept in a few other articles.
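
As an illustration of how such a requirement can be checked, here’s a hedged sketch that lists Epics still lacking a documented outcome, using Jira’s REST search API. The instance URL, credentials, and the “Anticipated Outcome” custom field name are all assumptions for illustration; your field and JQL will differ:

```python
import requests

JIRA_URL = "https://your-company.atlassian.net"  # hypothetical instance
AUTH = ("you@example.com", "api-token")          # Jira API token, not a password

# JQL assuming a custom field named "Anticipated Outcome" exists.
jql = 'issuetype = Epic AND "Anticipated Outcome" is EMPTY AND statusCategory != Done'

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": jql, "fields": "summary", "maxResults": 50},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

# Print each Epic that still needs its anticipated outcome documented.
for issue in resp.json()["issues"]:
    print(f"{issue['key']}: {issue['fields']['summary']}")
```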

As I prepared for this upcoming meeting, I came to a surprising realization:

We weren’t just implementing Value Stream Management, we were organically integrating Product Operating Model (POM) principles alongside it.

It wasn’t planned initially, but it’s now clear we weren’t choosing between two models. We were combining them, which became the foundation for our next level of operational maturity. This evolution reflects our commitment to continuously improving and aligning our methodologies to deliver greater customer and business impact.

Value Stream Management and the Product Operating Model

In software engineering, a value stream refers to the steps and activities involved in delivering a product or service to the customer. Value Stream Management (VSM) is the practice of optimizing this flow to improve speed, quality, and customer value.

A Product Operating Model (POM) serves as the blueprint for how a company designs, builds, and delivers software products. It ensures that teams, processes, and investments are aligned to maximize the customer’s value, driven by clear anticipated outcomes.

At first glance, Value Stream Management and the Product Operating Model are separate approaches, each with its terminology and focus. But when you look deeper, they share the same fundamental spirit: ensuring that our work creates meaningful value for customers and the business.

Despite this shared purpose, their emphasis differs slightly:

  • VSM focuses primarily on optimizing the flow of work, identifying bottlenecks, improving efficiency, and making work visible from idea to customer impact.
  • POM focuses on structuring teams and organizing ways of working, ensuring that ownership, funding, and decision-making are aligned to achieve clear, outcome-driven goals.

Together, they are not competing models but complementary disciplines: one sharpening how work flows, the other sharpening how teams are structured to deliver purposeful outcomes.

The key difference is where they start:

  • VSM starts with flow efficiency and system visibility.
  • POM starts with structure and ownership of the business outcome.

Why Combining POM and VSM Creates a Stronger Operating Model

Structure without optimized flow risks bureaucracy and stagnation.

Flow optimization without clear ownership and purpose risks fragmentation and, worse, the acceleration of delivering the wrong things faster.

Without aligning structure and flow to meaningful business and customer outcomes, organizations may become highly efficient at producing outputs that ultimately fail to drive real value.

Together, they provide what modern digital organizations need:

  • Product Operating Model (POM): Clear ownership, accountability, and alignment to expected business and customer outcomes.
  • Value Stream Management (VSM): Optimized, visible, and continuously improving flow of work across the organization.
  • Both combined: A complete operating model that structures teams around value and ensures that value flows efficiently to the customer.

When combined, POM and VSM offer a holistic view, structuring teams with purpose and optimizing how that purpose is realized through efficient delivery.

Industry Research: Reinforcing the Shift Toward Outcomes

Recent research reinforces the importance of this convergence. Planview’s 2024 Project to Product State of the Industry Report [1] found that elite-performing organizations are three times more likely to use cascading OKRs and measure success through business outcomes rather than output metrics. They are also twice as likely to regularly review Flow Metrics, confirming that outcome-driven practices combined with flow efficiency are becoming the new standard for high-performing organizations.

“Structure gives us ownership. Flow gives us visibility. Outcomes give us purpose. The strongest organizations master all three.”

Our Journey: VSM 1.5 as a Harmonization of POM and VSM

As we’ve matured our approach, it’s become clear that many of the practices we are implementing through VSM 1.5 closely align with the core principles of the Product Operating Model:

  • Clear Value Stream Identity:
    Using Domain-Driven Design (DDD) to define real business domains mirrors POM’s emphasis on persistent product boundaries.
  • Outcome Ownership:
    Mandating anticipated and actual outcomes aligns directly with POM’s shift from measuring outputs to business impacts.
  • Cross-functional Accountability:
Structuring teams around value streams, not just skills or departments, mirrors the cross-functional empowerment central to POM.
  • Flow Visibility and Metrics:
    Monitoring flow efficiency, team health, and quality reflects VSM’s original intent and POM’s focus on systemic improvement.
  • Customer-Centric Thinking:
    Closing the loop to validate outcomes ensures that teams remain connected to customer value, not just internal delivery milestones.

In short, without realizing it at first, VSM 1.5 evolved into a model that harmonizes the structural clarity of the Product Operating Model with the operational discipline of Value Stream Management.

Recognizing Our Current Gaps

While VSM 1.5 represents a significant step forward, it is not the final destination. There are important areas where we are still evolving:

  • Mid-Level OKR Development: While we have mandated anticipated outcomes at the initiative level, consistently translating these into clear, mid-level OKRs and connecting team efforts explicitly to business outcomes remains a work in progress. Strengthening this bridge will be critical to our long-term success.
  • Funding by Product/Value Stream: Today, our funding models still follow more traditional structures. Based on my experience across the industry, evolving to product-based funding will require a longer-term cultural shift. However, we are laying the necessary foundation by focusing on outcome-driven initiatives, clear value stream ownership, and understanding the investment value of teams.

These gaps are not signs of failure. They prove we are building the muscle memory needed to achieve lasting, meaningful change.

The Practical Benefits We Are Seeing and Expect to See

  • Stronger alignment between Product, Architecture, and Delivery.
  • Reduced cognitive load for teams working within clear domain boundaries.
  • Clearer prioritization, alignment, and purpose based on customer and business value.
  • A cultural shift toward accountability not just for delivery but for results.
  • Faster, better-informed decisions from improved visibility and flow insights.
  • Sustained operational efficiency improvements through retrospectives, insights, and continuous experimentation.

Something to Think About for Leaders

If you’re leading digital transformation, don’t limit yourself to choosing a Product Operating Model or Value Stream Management.

The real transformation happens when you intentionally combine both:

  • Structure teams around customer and business value.
  • Optimize how work flows through those teams.
  • Hold teams accountable not just for delivery but for real, measurable outcomes.
  • Continuously learn and improve by leveraging data insights and closing the feedback loop.

The future of software delivery isn’t about process versus structure. It’s about harmonizing both to deliver better, faster, and smarter.

What We’ve Been Building

Preparing for this meeting has helped crystallize what we’ve been building: a modern operating model that combines ownership, flow, and outcomes, putting customer and business value at the center of everything we do.

While our journey continues, and some cultural shifts are still ahead, we have built the foundation for a more outcome-driven, operationally efficient, and scalable future.

I’m looking forward to the upcoming conversation, where we’ll walk through the Product Operating Model, learn from their approach, and explore how it aligns with, replaces, or complements our evolution with Value Stream Management. It’s a conversation about more than methods; it’s about how organizations are shifting from tracking outputs to delivering actual business impact.

Let’s keep the conversation going:
How is your organization evolving its operating model to drive outcomes over outputs, combining structure, flow, and purpose to create real value?

Related Articles

  1. From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable, April 12, 2025. Phil Clark.

References

  1. The 2024 Project to Product State of the Industry Report. Planview.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Product Delivery, Software Engineering

From Scrum Master to Agile Delivery Manager: Evolution in the Age of Flow

April 14, 2025 by philc

6 min read

This post was inspired by a LinkedIn post shared by Dave Westgarth.

In 2025, we formally changed the title of Scrum Master to Agile Delivery Manager (ADM) in our technology division. This renaming wasn’t a rebrand for the sake of optics. It reflected a deeper evolution already happening, rooted in the expanding scope of delivery leadership, the adoption of Flow Metrics and Value Stream Management, and our real-world shift from strict Scrum toward a more customized Kanban-based model.

It was this year that the name finally clicked. After assigning Value Stream Architect responsibilities to our Scrum Masters and giving them ownership of delivery metrics, team-level delivery health, and collaboration across roles within their Agile team, I realized the title “Scrum Master” no longer fit their role. I even considered Agile Value Stream Manager, but it felt too narrow and platform-specific.

That’s when Agile Delivery Manager stood out, not only as a better label but also as a more accurate reflection of the mindset and mission.

I’m not alone in this. My wife, a Scrum Master, noticed a rise in Agile Delivery Manager roles. These roles are emerging as a natural evolution of the Scrum Master role, broader in scope but still grounded in servant leadership and Agile values. This shift is becoming more common across industries.

Why We Made the Change

This wasn’t an overnight decision—it was the culmination of years of observing the gap between traditional agile roles and modern delivery demands. I’ve written extensively about the evolving nature of delivery roles in the modern product and engineering ecosystem. In “Navigating the Digital Product Workflow Metrics Landscape,” I highlighted how organizations that have matured beyond Agile 101 practices shift their attention upstream toward value creation, flow efficiency, and business impact.

In that article, I shared:

“Organizations that have invested in high automation, eliminated waste, and accelerated CI/CD cycles are now shifting left—seeking broader visibility from idea to operation.”
– Navigating the Digital Product Workflow Metrics Landscape

Similarly, in “Dependencies Are Here to Stay,” I discussed why frameworks can’t box delivery leadership in:

“We can’t measure agility in isolation. Dependencies are part of the system, not a failure of it. Leadership roles must evolve to manage flow across those dependencies, not just within a team board.”

This evolution is what our former Scrum Masters were doing. They were coaching teams and guiding delivery conversations, navigating delivery risks, managing stakeholder expectations, and tracking systemic flow. The title needed to grow with the responsibility.

The Agile Role That Connects It All

Agile leadership roles and responsibilities vary across organizations. Some have Scrum Masters or Agile Leaders, while others use titles like Technical Project Manager or Agile Coach. In some cases, responsibilities shift to Engineering or Product Managers, and some companies distribute these duties among team members and eliminate the role entirely. Despite these differences, we believe a dedicated Agile leadership position is valuable. This role plays a key part in improving team performance, delivery efficiency, and optimizing workflows.


The Agile Delivery Manager role is unique in that it is the only role on the team not incentivized by a specific type of work.

  • Product Managers focus on growth and prioritize new features.
  • Technical Leads concentrate on architecture and managing technical debt.
  • Information Security leaders work to reduce security risks.
  • QA teams ensure defects are identified and fixed.

The Agile Delivery Manager operates at a higher level, overseeing workflow across the distribution of work types, including features, technical debt, risks, and defects. The role fosters continuous team improvement while ensuring that deliveries consistently drive tangible business value.
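
As an illustration, here’s a minimal Python sketch of computing Flow Distribution, the share of completed work by type, from a list of finished items (the counts are invented, not a recommended mix):

```python
from collections import Counter

# Hypothetical month of completed work items tagged by flow item type.
completed = (["feature"] * 22) + (["defect"] * 9) + (["debt"] * 6) + (["risk"] * 3)

counts = Counter(completed)
total = sum(counts.values())

print("Flow Distribution (share of completed work):")
for work_type, n in counts.most_common():
    print(f"  {work_type:<8} {n / total:6.1%}")

# An ADM reviews this mix with the team: is feature work crowding out
# debt and risk, or is the balance deliberate?
```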

Inside the Agile Delivery Manager Role

It’s worth clarifying: In our model, Agile Delivery Managers remain focused on their assigned Agile team or teams. While the title may sound broader, the role is not intended to operate across multiple delivery teams or coordinate program-level work. Instead, ADMs guide and improve the delivery flow within their own team context—coaching the team, optimizing its workflow, and partnering with product and engineering to ensure value is delivered efficiently.

Here’s how we now define the Agile Delivery Manager in our updated job description:

“As an Agile Delivery Manager, you’ll lead strategic transformation, champion Flow Metrics and VSM, and shape how teams deliver real business value.”

Key responsibilities include:

  • Agile Leadership & Flow-Based Delivery
    Coaching teams while enabling clarity, cadence, and sustainability in customized Kanban-style systems.
  • Team Collaboration & Dependency Management
    Collaborating with Product, QA, InfoSec, and Engineering roles within the team to resolve blockers, ensure quality, and maintain delivery flow.
  • Flow Metrics & Value Stream Optimization
    Leading metric reviews using Flow Time, Load, Efficiency, and Distribution to drive better delivery outcomes.
  • Value Stream Architecture
    Acting as system-level delivery architects, not of code, but of how work flows from concept to value.
  • Strategic Reporting & Outcome Alignment
    Building quarterly delivery reports that tie execution to business value, supporting leadership visibility and continuous improvement.

This role no longer fits the narrow scope that Scrum once offered. It combines delivery leadership, agile stewardship, and flow optimization.

What This Means for Scrum Masters

If you’re a Scrum Master wondering what’s next, you’re not alone. You’re likely doing many of these things already, but this role demands time to widen the lens.

As Dave Westgarth shared on LinkedIn:

“You’re using the same core competencies: facilitation, servant leadership, coaching, and team empowerment. They just get applied at different levels and from different perspectives.”

This evolution isn’t about abandoning Agile. It’s about scaling its intent.

Many of our ADM team members still value their strong Scrum foundation. However, they’ve broadened their focus to improve delivery efficiency, enhance team coordination, manage delivery risks, and ensure smooth team workflows across competing work types and stakeholder needs.

If you’re already guiding delivery beyond team ceremonies, influencing system flow, and navigating complexity, this evolution is your next chapter.

Final Thoughts

The shift to an Agile Delivery Manager reflects a modern reality: frameworks alone don’t scale agility; people do. The ADM role honors the coaching mindset of the Scrum Master while embracing the delivery complexities of today’s hybrid, platform-heavy, and outcome-driven organizations.

For our division, the name change signaled to our teams and business stakeholders that delivery leadership had evolved. More importantly, it gave our people permission to grow into that evolution.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

Beyond Frameworks: The Real Weight of Leading Transformation at Scale

April 14, 2025 by philc

9 min read

A leadership case study for those carrying the weight of transformation, when the change is working, but the friction won’t quit.

This isn’t about criticizing an organization; it’s about honoring the complexity of leading through change, even in highly successful environments.

Transformation fatigue: Is there light at the end of the tunnel? Yes, but just as things begin to brighten, change strikes again, and the path might grow dim once more.

This article was sparked by a quiet but revealing moment: a leader hesitated to define expected outcomes in Jira [1]. It reminded me that transformation fatigue doesn’t come from stalled progress but from something slower and more dangerous: the erosion of alignment when leadership philosophies diverge over time.

This article connects with thoughts shared by Willem-Jan Ageling, whose work I came across shortly after drafting this piece. Ageling highlights that team autonomy can only succeed when leadership supports it with genuine trust, demonstrated through actions, not just words. Building on that, I want to ask: What happens when the right frameworks are in place, the transformation progresses, yet trust begins to erode? Not because of outright failure but due to ongoing, subtle friction at scale.

You may know the feeling if you’re a senior leader navigating Agile, DevOps, or product transformation. The structures shift. The frameworks are adopted. But the mindset? That’s where the real work lives.

I’m incredibly proud of our organization’s transformation over the past decade. It’s not just a success story; it’s industry-leading in many respects. We’ve gone from legacy delivery models to empowered, product-focused teams aligned around value. We’ve modernized our technology stack, redefined our operating model, and adopted practices many organizations still strive to implement.

But let me be clear: I didn’t say perfect.

Transformation is not a box you check. It’s a system you nurture. And it’s a mindset you must defend, especially as leadership shifts, ownership changes, and misalignment creeps in.

One of the hardest lessons I’ve learned is that not everyone on the journey is aligned on the destination or how to get there. Some leaders have been at my side for years, yet comments or decisions occasionally reveal a superficial sense of agreement rather than true shared understanding. And those moments? They’re not setbacks. They’re opportunities to rethink, reconnect, and improve how we deliver work together.

This article isn’t a playbook. It’s a reflection. A case study. My own.

Willem-Jan Ageling recently wrote about the importance of trust in team autonomy. His article, Team autonomy only works when leadership shows trust 2, highlighted how quickly things can go off track when a leader reverts to control or bypasses key Agile roles. I see this all the time, not just in isolated teams, but in entire systems. One of the hardest parts of leading transformation is defending that trust across layers of leadership, especially when new leaders join the organization carrying different philosophies. Trust doesn’t scale automatically. Alignment doesn’t hold itself. And fatigue? It rarely comes from a lack of progress. It comes from the constant effort of holding it all together.

It’s what happens when the transformation is fundamental, yet the friction remains.

If you’ve felt that weight, you’re not alone.

Transformation doesn’t end. It evolves.

As organizations grow, so does the complexity of sustaining alignment. When you’re small, whether a startup or a few hundred people, it’s easier to rally around shared goals, maintain tight communication loops, and stay close to your delivery model. But as headcount scales, layers are added, and teams diversify, the risk of fatigue rises.

We aim for growth, but it’s a mixed blessing; it magnifies your strengths and weaknesses.

Fatigue is no longer isolated to individuals or pockets of teams. It becomes systemic when leadership philosophies diverge, alignment fades, or superficial agreement masks deeper disconnects.

Over the past decade, I’ve helped lead our organization from waterfall delivery to modern, empowered, product-focused teams. We’ve adopted Agile, DevOps, Lean, and Value Stream Management. We’ve moved from outputs to outcomes. We’ve rearchitected our application and modernized our platform.

And we’ve made real progress.

But no framework prepares you for the repetition, the re-explaining, and the relitigation of decisions you thought were long settled.

New executives arrive. Stakeholders change. Strategic direction pivots.

Fatigue doesn’t come from the frameworks but from the effort required to protect them when leadership philosophies keep shifting.

When done correctly, the effort doesn’t end. Transformation is not a one-time project; it’s a continuous journey and a mindset of leadership.

Even years in, the friction returns

By 2020–2021, we were six years into our transformation and hitting our stride. Then, leadership changed. A new technology leader arrived with a more hierarchical approach to Agile, rooted in functional oversight and centralized control. It wasn’t wrong; it was simply misaligned with the autonomous, cross-functional team structure we had built, grounded in Team Topologies 3, Team of Teams 4, and Turn the Ship Around! 5.

Where we embedded all roles necessary to deliver value in one team, this leader expected delivery to be driven by an Engineering Manager-led model, one where the EM managed both delivery and people. Both models are valid, but they are fundamentally different philosophies.

Around the same time, our private equity firm introduced the idea of tracking individual productivity units, a shift back toward legacy thinking like lines of code and activity-based metrics.

Fortunately, I had already introduced Value Stream Management and Flow Metrics, which emphasize outcomes, not output, especially not at the individual level.

We educated. We realigned. We defended the system.

We succeeded. But it was exhausting.

I’ve been that legacy leader

Earlier in my career, I led the traditional way: resource plans, Gantt charts, and command and control. Even as I started reading Agile literature and implementing new ceremonies, I hadn’t changed my thinking. I was doing Agile but not leading through it.

My fundamental shift came during a quiet moment of clarity when I realized I was in the way. That moment was the precursor to Rethink Your Understanding, not just a phrase but a mindset I committed to living and leading through. It’s been my compass ever since.

Resistance doesn’t always yell, it nods

The hardest resistance I’ve faced hasn’t been loud. It’s been polite, strategic, and sometimes even supportive on the surface.

One of the longest-running tensions came from a senior product leader whose product judgment I respect. He believed in strong direction and centralized control. I believe in empowerment and team ownership.

He would express agreement in executive sessions, but the structures remained top-down. Product managers were not empowered. Roadmaps were handed out rather than co-created. And teams, even years into our transformation, still hadn’t been trained in Agile principles.

Not wrong, just not aligned.

And the cost? Quiet drag. Misunderstood roles. Fatigue.

A moment that made it clear

After our acquisition and the departure of our former CEO, I asked that same colleague for his thoughts on how our division’s executive team might change, a team I’ve been part of for the past few years.

“We’ll be focused on operations. We’ll bring in some senior managers from the business. I’m not sure this is the best use of your time.”

That moment hit hard, but it wasn’t personal. It was clarifying.

He still didn’t see engineering as strategic, and he still didn’t view my technology leadership as part of operational decision-making.

That’s when I realized that fatigue doesn’t come from open disagreement but from the illusion of alignment.

I’ve been writing this story for years

Many of my articles have tried to name this tension:

  • Mindsets That Shape Software Delivery Team Structures
  • Avoiding Flow Metric Confusion
  • Agile Era Leadership: Overcoming Legacy Leadership Friction and Four Industry Conversations

These weren’t rants. They were reflections, a way to process what it means to lead inside a transforming organization, even when not everyone is transforming with it.

Post-acquisition, two paths emerge

Today, I report to a senior leadership team that believes in transformation through a different model. They emphasize Engineering Managers embedded within teams, hands-on principal-level leadership, and individually oriented career frameworks built quickly based on experience.

It’s not a bad model. It’s simply different from ours, which focuses on cross-functional autonomy, long-term capability building, and outcome orientation over individual output.

Neither approach is wrong. But this team operates from very different assumptions.

And reconciling them, that’s where the fatigue returns.

Industry conversations keep me grounded

Outside the walls of my org, I don’t need to explain why value streams matter or why DevOps is more than automation.

When I connect with other leaders at conferences or through advisory boards, I am reminded that I am not alone.

These conversations bring clarity, encouragement, and strength when the internal friction gets heavy.

And yes, sometimes I want to be right

Inside my team, we joke about “Phil Fridays,” when my conviction tends to spike after a week of hard conversations…

It’s not about ego. It’s about care.

I want to build the best teams on the field.

I want to give people purpose, clarity, and ownership.

I want to lead in a way that leaves systems better than I found them.

Others feel the same. And that’s why this isn’t about who’s right or wrong.

It’s about alignment and the emotional toll when it’s missing.

Agile isn’t failing. Leadership is

You’ve heard it: “Agile is failing.” “DevOps didn’t deliver.”

But it’s not the frameworks that fail; it’s how they’re implemented and, more specifically, how they’re led.

When Agile is used to mask command and control, or DevOps becomes just a reporting layer, don’t blame the model. In most cases, the fault lies with leadership and the mindset behind it.

Leading transformation means choosing clarity, again and again

Top 5 Triggers of Leadership Friction

  1. Leadership turnover or strategic pivots that deprioritize transformation values.
  2. Conflicting ownership philosophies (e.g., empowerment vs. control).
  3. Introduction of metrics or standards that contradict autonomy.
  4. Rhetorical alignment masking structural or behavioral misalignment.
  5. Organizational scaling that stretches philosophical consistency.

As organizations scale, the stakes grow higher. Alignment becomes harder, and systems become more complex. That means transformation fatigue doesn’t just linger; it compounds. What once felt like a collaborative push for change at a smaller scale can start to feel like a grind as your influence spans more teams, departments, and philosophies.

Growth is a sign of success but also magnifies misalignment if we’re not actively checking for it. It’s not just the number of people that changes; it’s the number of assumptions.

Here’s what I’ve learned:

  • When you sign up to lead transformation, you’re not signing up for a framework. You’re signing up for a lifetime of rethinking your beliefs and inviting others to do the same.
  • You’re signing up for fatigue, not because you’re weak, but because the work is real.
  • You’re signing up for friction, not because people are bad, but because philosophies differ.
  • You’re signing up for progress, not perfection.

And if you’re still showing up, you’re holding the line, listening, and learning while advocating for the path you’re leading.

And that’s the work.

Let’s be transparent and honest and stop pretending we’re aligned when we’re not. For those leading transformation: Don’t confuse alignment with agreement. Keep asking. Keep listening. Keep showing up.


References

  1. Clark, Phil (April 12, 2025). From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable. rethinkyourunderstanding.com.
  2. Ageling, Willem-Jan (April 6, 2025). Team autonomy only works when leadership shows trust. https://medium.com/@WJAgeling/team-autonomy-only-works-when-leadership-shows-trust-2ab59182f350.
  3. Skelton, M. & Pais, M. (2019). Team Topologies: Organizing Business and Technology Teams for Fast Flow. IT Revolution Press.
  4. McChrystal, S. & Collins, T. & Silverman, D. & Fussell, C. (2015). Team of Teams: New Rules of Engagement for a Complex World. Portfolio.
  5. Marquet, David L. (2015). Turn the Ship Around! Portfolio/Penguin.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable

April 12, 2025 by philc

9 min read

Connect the dots: show how engineering efforts drive business impact by linking teams’ work to key organizational metrics and outcomes. Highlight their value and contribution to overall success.

When Sarcasm Reveals Misalignment

Last week, one of my Agile Delivery Leaders brought forward a concern from her team that spoke volumes, not just about an individual, but about the kind of tension that quietly persists in even the most mature organizations.

She asked her team to define the expected outcome for a new Jira Epic, a practice I’ve asked all teams to adopt to ensure software investments align with business goals. However, the team struggled to identify the anticipated outcome. On top of that, a senior team member who’s been part of our transformation for years dismissed the idea instead of contributing to the discussion, leaving her torn between his seniority and her own responsibilities. He commented something like:

“Why are we doing this? This is stupid. Phil read another book, and suddenly we’re all expected to jump on board.”

When I first heard that comment secondhand, I felt a wave of anger; it struck me as pure arrogance. This leader chose not to share his perspective with me directly, perhaps for reasons he deemed valid. But as I thought more about it, I realized it wasn’t arrogance at all, but ignorance. Not malicious ignorance, but the kind that comes from discomfort, uncertainty, or an unwillingness to admit they no longer understand or align with where things are going. Comments like that are often defense mechanisms. They mask deeper resistance, reveal a lack of clarity, or quietly question whether someone still fits into a system evolving beyond their comfort zone.

This wasn’t about rejecting change or progress; it was about pushing back against how we’re evolving. Moments like this remind us that true transformation isn’t just about forging ahead; it’s about fostering belief and alignment in mindset and actions as we move forward.

Purpose-Driven Development: My Approach to Sustainable Alignment

I asked teams to define anticipated outcomes not to add overhead but to protect the integrity of the way we build software.

Over the past decade, I’ve worked hard to lead our engineering teams and organization out of the “feature factory” trap, where the focus is on output volume, velocity, and shipping for the sake of shipping. Through that experience, I developed what I call Purpose-Driven Development (PDD).

Purpose-driven development might sound like a buzzword, but it’s how we bring Agile and Lean principles to life. It ensures delivery teams aren’t just writing code; they’re solving the right problems for the right reasons with clear goals and intentions.

PDD is built on one core idea: every initiative, epic, and sprint should be based on a clear understanding of why it matters.

Anticipated Outcomes: A Small Practice That Changes Everything

To embed this philosophy into our day-to-day work, we introduced a simple yet powerful practice:

Every Epic or Initiative must include an “Anticipated Outcome.”

Just a sentence or two that answers:

  • What are we hoping to achieve by doing this work?
  • How will it impact the customer, the Business, or the platform?

We don’t expect perfection. We expect intention. The goal isn’t to guarantee results but to anchor the work in a hypothesis that can be revisited, challenged, or learned from.
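
For illustration only (the specifics here are invented), a hypothetical Anticipated Outcome on a checkout Epic might read: “Simplify the payment step to reduce cart abandonment, which we anticipate will increase completed orders.” What matters is the testable intent, not the polish.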

This simple shift creates:

  • Greater alignment between teams and strategy
  • More meaningful prioritization
  • Opportunities to reflect on outcomes, not just outputs
  • Visibility across leadership into what we’re investing in

Who Might Push Back and Why That’s Okay

When we ask teams to define anticipated outcomes, it’s not about creating friction; it’s about creating focus. And this shouldn’t feel like a burden to most of the team.

I believe engineers will welcome it. Whether they realize it at first or not, this clarity gives them purpose. It ties their daily work to something that matters beyond code.

The only two roles I truly expect might feel frustration when asked to define anticipated outcomes are:

Product Managers and Technical Leaders.

And even that frustration? It’s understandable.

Product Managers often experience pain from not being involved early enough in the ideation or problem-definition stage. They may not know the anticipated outcome if they’re handed priorities from a higher-level product team without the context or autonomy to shape the solution. And that’s the problem, not the question itself, but the absence of trust and inclusion upstream.

For Technical Leaders, it often comes when advocating for tech debt work. They know the system needs investment but struggle to translate that into a clear business reason. I get it; it’s frustrating when you know the consequences of letting entropy creep in, but you haven’t been taught to describe that impact in terms of business value, customer experience, or system performance.

But that’s exactly why this practice matters.

Asking for an anticipated outcome isn’t a punishment. It’s an exercise in alignment and clarity. And if that exercise surfaces frustration, that’s not failure. It’s the first step toward better decision-making and stronger cross-functional trust.

Whether it’s advocating for feature delivery or tech sustainability, we can’t afford to work in a vacuum. Every initiative, whether shiny and new or buried in system debt, must have a reason and a result we’re aiming for.

Anticipated Outcomes First, But OKR Alignment Is the Future

When I introduced the practice of documenting anticipated outcomes in every Epic or Initiative, I also asked for something more ambitious: a new field in our templates to capture the parent OKR or Key Result driving the work.

The goal was simple but powerful:

If we claim to be an outcome-driven organization, we should know what outcome we’re aiming for and where it fits in our broader strategy.

I aimed to help teams recognize that their Initiatives or Epics could serve as team-level Key Results directly tied to overarching business objectives. After all, this work doesn’t appear by chance. It’s being prioritized by Product, Operations, or the broader Business for a deliberate purpose: to drive progress and advance the company’s goals.
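
To make that connective tissue concrete, here is a purely hypothetical chain: an Objective of “Reduce customer churn” carries a Key Result of “Cut involuntary churn by 15% this year,” and an Epic like “Retry failed card payments” lists that Key Result as its parent, with an Anticipated Outcome describing the expected dent in churn.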

But when I brought this to our Agile leadership group, the response was clear: this was too much to push simultaneously.

Some teams didn’t know the parent KR, and some initiatives weren’t tied to a clearly articulated OKR. Our organizational OKR structure was often incomplete, and we were missing the connective tissue between top-level objectives and team-level execution.

And they were right.

We’re still maturing in how we connect strategy to delivery. For many teams, asking for the anticipated outcome and the parent OKR at once felt like a burden, not a bridge.

So, we paused the push for now. My focus remains first on helping teams articulate the anticipated outcome. That alone is a leap forward. As we strengthen that muscle, I’ll help connect the dots upward, mapping team efforts to the business outcomes they drive, even if we don’t have the complete OKR infrastructure yet.

Alignment starts with clarity. And right now, clarity begins with purpose.

Without an anticipated outcome, every initiative is a dart thrown in the dark.

It might land somewhere useful or waste weeks of productivity on something that doesn’t matter.

Documenting the outcome gives us clarity and direction. It means we’re making strategic moves, not random ones. And it reduces the risk of high-output teams being incredibly productive… at the wrong thing.

Introducing the Feature Factory Ratio

To strengthen our focus on PDD and prioritize outcomes over outputs, we are introducing a new core insights metric as part of our internal diagnostics:

Feature Factory Ratio (FFR) =

(Number of Initiatives or Epics without Anticipated Outcomes / Total Number of Initiatives or Epics) × 100
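
Using hypothetical numbers: if 12 of a portfolio’s 40 active Epics lack an Anticipated Outcome, the FFR is (12 / 40) × 100 = 30%.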

The higher the ratio, the greater the risk of operating like a feature factory, moving fast but potentially delivering little that matters.

The lower the ratio, the more confident we can be that our teams are connecting their work to value.

This ratio isn’t about micromanagement; it’s about organizational awareness. It tells us where alignment is breaking down and where we may need to revisit how we communicate the “why” behind our work.
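
For teams that track work in Jira, the ratio can be computed automatically. Below is a minimal sketch in Python against Jira’s REST search endpoint; the site URL, custom field ID (customfield_10800), environment variables, and project key are all assumptions you would adapt to your own instance, and pagination is omitted for brevity.

```python
import os

import requests

# Hypothetical values; adapt to your own Jira instance.
JIRA_URL = "https://yourcompany.atlassian.net"
OUTCOME_FIELD = "customfield_10800"  # assumed field ID for "Anticipated Outcome"


def feature_factory_ratio(project_key: str) -> float:
    """Return the Feature Factory Ratio (%) for one project:
    Epics lacking an Anticipated Outcome / all Epics x 100."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={
            "jql": f'project = "{project_key}" AND issuetype = Epic',
            "fields": OUTCOME_FIELD,  # fetch only the field we need
            "maxResults": 100,        # pagination omitted for brevity
        },
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    epics = resp.json()["issues"]
    if not epics:
        return 0.0
    # An Epic counts as "missing" if the field is absent, null, or blank.
    missing = sum(
        1 for epic in epics
        if not (epic["fields"].get(OUTCOME_FIELD) or "").strip()
    )
    return missing / len(epics) * 100


if __name__ == "__main__":
    # "PLAT" is a hypothetical project key.
    print(f"Feature Factory Ratio: {feature_factory_ratio('PLAT'):.1f}%")
```

Running this per project (or across a portfolio JQL) gives a trendable number; as with any diagnostic, the value is in watching it move against the team’s own baseline, not in comparing teams.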

Why We Call It the Feature Factory Ratio

When I introduced this metric, I considered several other names:

  • Outcome Alignment Ratio – Clear and descriptive, but lacking urgency
  • Clarity of Purpose Index – Insightful, but a bit abstract
  • Value Connection Metric – Emphasizes intent, but sounds like another analytics KPI

Each option framed the idea well, but they didn’t hit the nerve I wanted to expose.

Ultimately, I chose the Feature Factory Ratio because it speaks directly to the cultural pattern we’re trying to break.

It’s provocative by design. It challenges teams and leaders to ask, “Are we doing valuable work or just shipping features?” It turns an abstract concept into a visible metric and surfaces conversations we must have when our delivery drifts from our strategy.

Sometimes, naming things with impact helps us lead the behavior change that softer language can’t.

Sidebar: Superficial Alignment, The Silent Threat

One of the biggest leadership challenges in digital transformation isn’t open resistance, it’s superficial alignment.

Superficially aligned leaders attend the workshops, adopt the lingo, and show up at the town halls, but when asked to change how they work or lead, they bristle. They revert. They roll their eyes or make sarcastic comments.

But what they’re really saying is: I’m not sure I believe in this, or I don’t know how I fit anymore.

The danger is that superficial alignment looks like progress, but it blocks true transformation. It creates cultural drag. It confuses teams and weakens momentum.

Moments like the one I shared remind me that transformation isn’t a checkbox but a leadership posture. And sometimes, those sarcastic comments? They’re your clearest sign of where real work still needs to happen.

Start Where You Are and Grow from There

We’re all at different points in our transformation journeys as individuals, teams, and organizations.

So, instead of reacting with frustration when someone can’t articulate an outcome or when a snide remark surfaces resistance, use it as a signal.

Meet your team where they are. Use every gap as a learning opportunity, not a leadership failure.

If a team can’t answer “What’s the anticipated outcome?” today, help them start asking it anyway. The point isn’t to have every answer right now. It’s to build the muscle so that someday, we will.

These questions aren’t meant to judge where we are. They’re meant to guide us toward where we’re trying to go. And that is the work of modern software leadership.

It’s easy to say we want to be outcome-driven. Embedding that belief into daily practice is harder, especially when senior voices or legacy habits push back.

But this is the work:

  • Aligning delivery to strategy
  • Teaching teams to think in terms of impact
  • Holding the line on purpose, even when it’s uncomfortable
  • Measuring not just what we ship but why we’re shipping it

Yes, I’ve read my fair share of books. Along the way, key moments and observed outcomes have shaped how I introduced new initiatives within our division and organization, such as Value Stream Management, and deepened my understanding of what it means to deliver real value. I’ve led teams through transformation and seen what works. From my experience in our organization and working with other industry leaders, I’ve learned that software delivery with a clear purpose is more effective, empowering, and valuable for the Business, our customers, and the teams doing the work.


Leader’s Checklist: Outcome Alignment in Agile Teams

Use this checklist to guide your teams and yourself toward delivering work that matters.

1. Intent Before Execution

  • Is every Epic or Initiative anchored with a clear Anticipated Outcome?
  • Have we stated why this work matters to the customer, business, or platform?
  • Are we avoiding the trap of “just delivering features” without a defined end state?

2. Strategic Connection

  • Can this work be informally or explicitly tied to a higher-level Key Result, business goal, or product metric?
  • Are we comfortable asking, “What is the business driver behind this work?” even if it’s not written down yet?

3. Team-Level Awareness

  • Do developers, QA, and designers understand the purpose behind what they’re building?
  • Can the team articulate what success looks like beyond “we delivered it”?

4. Product Owner Empowerment

  • Has the Product Manager or Product Owner been involved in problem framing, or were they handed a solution from above?
  • If they seem disconnected from the outcome, is that a signal of upstream misalignment?

5. Tech Debt with Purpose

  • If the work is tech debt, have we articulated its impact on system reliability, scalability, or risk?
  • Can we tie this work back to customer experience, transaction volume, or long-term business performance?

6. Measurement & Reflection

  • Are we tracking how many Initiatives or Epics lack anticipated outcomes using the Feature Factory Ratio?
  • Do we ever reflect on anticipated vs. actual outcomes once work is delivered?

7. Cultural Leadership

  • Are we reinforcing that asking, “What’s the anticipated outcome?” is about focus, not control?
  • When we face resistance or discomfort, are we leading with curiosity instead of compliance?

Remember:

Clarity is a leadership responsibility.

If your teams don’t know why they’re doing the work, the real problem is upstream, not them.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management
