
Leading Through the AI Hype in R&D

July 27, 2025 by philc

7 min read

Note: AI is evolving rapidly, transforming workflows faster than expected. Most of us can’t predict how quickly or to what level AI will change our teams or workflow. My focus for this post is on the current state, pace of change, and the reality vs hype at the enterprise level. I promote the adoption of AI and encourage every team member to embrace it.

I’ve spent the past few weeks deeply immersed in “vibe coding” and experimenting with agentic AI tools during my nights and weekends, learning how specialized agents can orchestrate like real product teams when given proper context and structure. But in my day job as a senior technology leader, the tone shifts. I’ve found myself in increasingly chaotic meetings with senior leaders, chief technology officers, chief product officers, and engineering VPs, all trying to out-expert each other on the transformative power of AI on product and development (R&D) teams.

The energy often feels like a pitch room, not a boardroom. Someone declares Agile obsolete. Another suggests we can replace six engineers with AI agents. A few toss around claims of “30× productivity.” I listen, sometimes fascinated, often frustrated, at how quickly the conversation jumps to conclusions without asking the right questions. More troubling, many of these executives are under real pressure from investors and ownership to show ROI. If $1M is spent on AI adoption, how do we justify the return? What metrics will we use to report back?

Hearing the Hype (and Feeling the Exhaustion)

One executive confidently declared, “Agile and Lean are dead,” citing the rise of autonomous AI agents that can plan, code, test, and deploy without human guidance. His opinion echoed a recent blog post, Agile Is Dead: Long Live Agentic Development, which criticized Agile rituals like daily stand-ups and sprints as outdated and encouraged teams to let agents take over the workflow¹. Meanwhile, agile coaches argue that bad Agile, not Agile itself, is the real problem, and that AI can strengthen Agile if applied thoughtfully.

The hype escalates when someone shares stories of high-output engineering from a senior developer who is keeping pace with AI capabilities: 70 AI-assisted commits in a single night, barely touching the keyboard. Another proposes shrinking an 8-person team to just two engineers, one writing prompts and one overseeing quality, while AI agents do the rest. These stories are becoming increasingly common, especially as research suggests that AI can dramatically reduce the number of engineers needed for many projects². Elad Gil even claimed most engineering teams could shrink by 5×–10×.

But these same reports caution against drawing premature conclusions. They warn that while AI enables productivity gains, smaller teams risk creating knowledge silos, reduced quality, and overloading the remaining developers². Other sources echo this risk: Software Engineering Intelligence (SEI) tools have flagged increased fragility and reduced clarity in AI-generated code when review practices and documentation are lacking³.

What If We’re Already Measuring the Right Things?

While executives debate whether Agile is dead, I find myself thinking: we already have the tools to measure AI’s impact; we just need to use them.

In my organization’s division, we’ve spent years developing a software delivery metrics strategy centered on Value Stream Management, Flow Metrics, and team sentiment. These metrics already show how work flows through the system, from idea to implementation to value. They include:

  • Flow metrics like distribution, throughput, time, efficiency, and load
  • Quality indicators like change failure rate and security defect rate
  • Sentiment and engagement data from team surveys
  • Outcome-oriented metrics like anticipated outcomes and goal (OKR) alignment

Recently, I aligned our Flow Metrics with the DX Core 4 Framework⁴, organizing them into four key categories: speed, effectiveness, quality, and impact. We made these visual and accessible, with a simple chart showing how each metric relates to delivery health. These metrics don’t assume Agile is obsolete or that AI is the solution. They track how effectively our teams are delivering value.
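
As a rough sketch of that grouping (in Python; the metric names come from the list above, and the exact mapping is our own working arrangement, not an official DX Core 4 definition):

    # Illustrative grouping of our existing metrics into the four
    # DX Core 4 dimensions. The mapping is an example, not a standard.
    DX_CORE_4 = {
        "speed":         ["flow_time", "flow_velocity"],
        "effectiveness": ["flow_efficiency", "flow_load", "team_sentiment"],
        "quality":       ["change_failure_rate", "security_defect_rate"],
        "impact":        ["anticipated_outcomes_met", "okr_alignment"],
    }

    def dimension_for(metric: str) -> str:
        """Return the DX Core 4 dimension a given metric rolls up to."""
        for dimension, metrics in DX_CORE_4.items():
            if metric in metrics:
                return dimension
        raise KeyError(f"unmapped metric: {metric}")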

So when senior leaders asked, “How will we measure AI’s impact?” I reminded them, we already are. If AI helps us move faster, we’ll see it in flow time. If it increases capacity, we’ll see it in throughput (flow velocity). If it maintains or improves quality, our defect rates and sentiment scores will reflect that. The same value stream lens that shows us where work gets stuck will also reveal whether AI helps us unstick it.

Building on Existing Metrics: The AI Measurement Framework

Instead of creating an entirely new system, I layered an AI Measurement Framework on top of our existing performance metrics⁵. The framework includes three categories:

  1. Utilization:
    • % of AI-generated code
    • % of developers using AI tools
    • Frequency of AI-agent use per task
  2. Impact:
    • Changes in flow metrics (faster cycle time)
    • Developer satisfaction or frustration
    • Delivered value per team or engineer
  3. Cost:
    • Time saved vs. licensing and premium token cost
    • Net benefit of AI subscriptions or infrastructure

This approach answers the following questions: Are developers using AI tools? Does that usage make a measurable difference? And does the difference justify the investment?
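
To make that concrete, here is a minimal sketch of how the utilization and cost categories might be computed for a reporting period. The field names, the hours-saved estimate, and the loaded hourly rate are all hypothetical assumptions, not part of the framework itself; impact is deliberately omitted because it comes from the flow metrics we already track.

    from dataclasses import dataclass

    @dataclass
    class AIMeasures:
        """One reporting period of AI measurement data (hypothetical fields)."""
        ai_assisted_commits: int           # utilization
        total_commits: int
        devs_using_ai: int                 # utilization
        total_devs: int
        hours_saved: float                 # estimated time saved by AI tooling
        tool_cost: float                   # licenses plus premium token spend
        loaded_hourly_rate: float = 100.0  # assumed cost of an engineer-hour

        def utilization(self) -> dict:
            """Adoption signals: who uses AI, and how much work it touches."""
            return {
                "pct_ai_assisted_commits": self.ai_assisted_commits / self.total_commits,
                "pct_devs_using_ai": self.devs_using_ai / self.total_devs,
            }

        def net_benefit(self) -> float:
            """Cost category: value of time saved minus what the tools cost us."""
            return self.hours_saved * self.loaded_hourly_rate - self.tool_cost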

In a recent leadership meeting, someone asked, “What percentage of our engineers are using AI to check in code?” That’s an adoption metric, not a performance one. Others have asked whether we can measure AI-generated commits per engineer to report to the board. While technically feasible with specific developer tools, this approach risks reinforcing vanity metrics that prioritize motion over value. Without impact and ROI metrics, adoption alone can lead to gaming behavior, and teams might flood the system with low-value tasks to appear “AI productive.” What matters is whether AI is helping us deliver better, faster, and smarter.

I also recommend avoiding vanity metrics, such as lines of code or commits. These often mislead leaders into equating motion with value. Many vendors boast “AI wrote 50% of our code,” but as developer-experience researcher Laura Tacho explains, this usually counts accepted suggestions, not whether the code was modified, deleted, or even deployed.⁵ We must stay focused on outcomes, not outputs.

The Risk of Turning AI into a Headcount Strategy

One of the more concerning trends I’m seeing is the concept of “headcount conversion,” which involves reducing team size and utilizing the savings to fund enterprise AI licenses. If seven people can be replaced by two and an AI license, along with a premium token budget, some executives argue, then AI “pays for itself.” However, this assumes that AI can truly replace human capability and that the work will maintain its quality, context, and business value.

That might be true for narrow, repeatable tasks, or small organizations or startups struggling with costs and revenue. But it’s dangerous to generalize. AI doesn’t hold tribal knowledge, coach junior teammates, or understand long-term trade-offs. It’s not responsible for cultural dynamics, systemic thinking, or ethical decisions.

Instead of shrinking teams, we should consider expanding capacity. AI can help us do more with the same people. Developer productivity research indicates that engineers typically reinvest AI-enabled time savings into refactoring, enhancing test coverage, and implementing cross-team improvements², which compounds over time into stronger, more resilient software.

Slowing Down to Go Fast

Leaving those leadership meetings, I felt a mix of energy and exhaustion. Many people wanted to appear intelligent, but few were asking thoughtful questions. We were racing toward solutions without clarifying what problem we were solving or how we’d measure success.

So here’s my suggestion: Let’s slow down. Let’s agree on how we’ll track the impact of AI investments. Let’s integrate those measurements into systems we already trust. And let’s stop treating AI as a replacement for frameworks that still work; instead, let’s use it as a powerful tool that helps us deliver better, faster, and with more intention.

AI isn’t a framework. It’s an accelerator. And like any accelerator, it’s only valuable if we’re steering in the right direction.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Leschorn, J. (2025, May 29). Agile Is Dead: Long Live Agentic Development. Superwise. https://superwise.ai/blog/agile-is-dead-long-live-agentic-development/
  2. Ameenza, A. (2025, April 15). The New Minimum Viable Team: How AI Is Shrinking Software Development Teams. https://anshadameenza.com/blog/technology/ai-small-teams-software-development-revolution/
  3. Circei, A. (2025, March 13). Measuring AI in Engineering: What Leaders Need to Know About Productivity, Risk and ROI. Waydev. https://waydev.co/ai-in-engineering-productivity-risk-roi/
  4. Saunders, M. (2025, January 6). DX Unveils New Framework for Measuring Developer Productivity. InfoQ. https://www.infoq.com/news/2025/01/dx-core-4-framework/
  5. GetDX. (2025). Measuring AI Code Assistants and Agents. DX Research. https://getdx.com/research/measuring-ai-code-assistants-and-agents/

Filed Under: Agile, AI, Delivering Value, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

When Team Structure Collides with Role Alignment

May 26, 2025 by philc

How Merging Engineering Models Can Disrupt What Works, and What to Do About It

11 min read

After a recent merger, I was asked to advise an engineering organization that needed to align two very different delivery models.

One part of the organization used small, long-term, cross-functional teams with distributed leadership (self-managed). The other followed a traditional Engineering Manager (EM) model, where one manager handled people, delivery, and agile practices. The company wanted to unify job responsibilities, eliminate performance ambiguity, and ensure fair development opportunities across all teams. The executive leader of the larger organization articulated a clear vision: one company with a single, thoughtfully designed career path built on a foundation of care and respect.

These are worthy goals. I’ve helped lead engineering through nine acquisitions and know firsthand the importance of consistent titles and expectations. But I’ve also learned something else:
“Aligning job titles and responsibilities without fixing team design, architecture, role responsibilities, and delivery structure doesn’t solve the real issues. It just hides them and creates tension and career friction across the division.”

It’s not about being right. It’s about being aligned.

Alignment takes time, planning, and honest conversation.

I’m aligned with the executive leader’s vision: to unify as one company with a shared career path, achieved with care, not urgency. Whether that takes six months or a year and a half, the focus should be on clarity and collaboration, rather than speed.

The real challenge isn’t just structural; it’s cultural. The larger organization’s strong-willed leadership team has never worked within a self-managed team structure. Fixed perspectives can stall progress if we don’t create space to explore why the models differ, not just how they do. We need to identify the root causes of the structural divergence and assess the potential risks to team culture, autonomy, and product alignment, particularly for high-performing, self-managed teams.

The executive leader also made the point that integration shouldn’t be imposed; it should evolve at the pace of shared understanding. Once we reach that point, we owe it to the teams to communicate with clarity before information leaks and assumptions or uncertainty take hold. The real challenge arose from the other senior leaders within the group. I won’t say which model is better, as it depends on the context. Instead, this article explores the challenges that can occur when we centralize accountability and responsibilities without considering the unique context. It also looks at how well-meaning integration efforts can unintentionally disrupt high-performing teams.

Why This Matters: Fairness vs. Fit

After a merger or acquisition, it’s natural and smart for engineering leaders to unify role definitions, career paths, and performance frameworks. Inconsistent job titles and responsibilities across similar roles can create confusion, slow promotions, and introduce bias. If two managers hold the same title but lead very different types of teams, performance expectations become subjective. That’s not fair to them or to the engineers they support.

So, I understood the goals of the integration effort:

  • Establish unified job responsibilities across teams
  • Minimize churn, ensuring no team member feels alienated or unsupported during the transition
  • And maintain high-performing teams that can support product delivery and operational efficiency

The goals weren’t the problem. The real challenge was the implementation.

How can you use a shared career framework when team structures and responsibilities differ?

This difference in team design and responsibilities is where the friction emerged, and where the search for solutions had to begin.

Two Team Models in Contrast

The Engineering Manager Model

In the parent or acquiring organization’s Engineering Manager (EM)-led structure, a single person is responsible for managing people, overseeing delivery, driving agile practices, and partnering with product. EMs are accountable for both team output and individual performance and development. In many cases, they also serve as the technical lead.

Each EM typically works directly with a team of 6-10 software engineers. The team does not have a Scrum Master or Agile Coach; the EM is responsible for Agile accountability. Similarly, there is no dedicated QA team member, so quality accountability falls on the EM and the software engineers.

This EM model was framed as a version of the “Iron Triad” or “Iron Triangle,” centered on Engineering, Product, and (presumably) UX or Delivery. However, in practice, the Engineering Manager often became the default source of team process, performance, and planning.

This structure isn’t inherently wrong. It works best when:

  • Teams are large and need strong coordination
  • The architecture is monolithic or tightly coupled
  • Product and engineering require direct managerial alignment

However, when scaled broadly or applied without nuance, it can quickly lead to role overload and reliance on individuals rather than systems to drive outcomes.

The Self-Managed Cross-Functional Model

The smaller teams in the acquired organization followed a different model entirely. These were long-lived, cross-functional teams of 8 to 12 people, including 2-4 software engineers, 1 QA, 1 product manager/owner, and in many cases, agile delivery leads or scrum masters. They had everything they needed to deliver software without needing to coordinate with other teams in most cases.

In this structure:

  • Responsibilities are distributed across roles instead of consolidated under a single leader.
  • Engineering Managers exist, but they act primarily as career coaches and mentors, not team leads.
  • Agile delivery is facilitated by dedicated Scrum Masters or Agile Leaders embedded in the team.
  • Managers typically oversee 5 to 7 engineers across multiple teams and contribute technically as ICs when appropriate.

These teams naturally align with microservices, subdomains, or product value streams. They work well when the architecture allows for autonomy, and the organization invests in clarity, trust, and lightweight governance.

The acquired organization structured its teams to align with clear architectural boundaries, with each team focused on a specific subdomain or service. This approach made the teams both cross-functional and architecturally cohesive, reflecting Conway’s Law by ensuring the team structure matched the design of the software.

Key Difference: Accountability Consolidation

Both models contain the same essential responsibilities: engineering, product collaboration, quality, and delivery. However, in one, accountability is centralized under a manager, while in the other, it is distributed across the team.

The solution isn’t just about structure. It’s about how tightly the team model mirrors the system it’s building.

Conway’s Law tells us that our software systems mirror our organizational communication structures. When architecture is monolithic or tightly integrated, it makes sense to have centralized accountability. But when architecture is modular and service-oriented, small, autonomous teams that map directly to system boundaries and subdomains can accelerate delivery and reduce coordination overhead.

And structure doesn’t just affect outcomes, it shapes culture.

In centralized models, decision-making authority and responsibility often rest with the Engineering Manager. This can bring clarity, especially for early-career engineers or less mature teams. But it can also reduce autonomy or create learned dependence, where teams hesitate to act without explicit approval.

In distributed models, autonomy is expected, and with it, psychological safety becomes critical. Teams must feel trusted to make decisions, fail safely, and adjust course without manager intervention. When done well, this fosters ownership and speed. However, without strong role clarity, trust, and support systems, that same autonomy can lead to confusion or misalignment.

So, while the surface question is, “What does the Engineering Manager own?” the deeper question is, “Does the team structure support the system architecture and the culture you want to build?”

Where It Breaks: Role Titles vs. Role Expectations

On paper, this integration effort was about consistency: standardizing job titles, aligning role definitions, and applying a shared career framework across teams.

In practice, that consistency masked a deeper misalignment: the same title, Engineering Manager, carried very different expectations depending on the model it came from.

In the Engineering Manager-led model:

  • The EM is accountable for people leadership, delivery, agile practice, team velocity, and technical direction.
  • There is no embedded Scrum Master or Agile Coach.
  • The EM is expected to own outcomes, from sprint or iteration health to individual growth to team throughput.

In the self-managed, cross-functional model:

  • The EM is a career manager and mentor, often contributing technically as a senior IC.
  • Agile facilitation is handled by a dedicated team member (e.g., Scrum Master, Agile Leader, Agile Delivery Manager).
  • Delivery ownership and accountability are shared across the team; no single role “owns” performance.

From the outside, both are “Engineering Managers.” But their responsibilities are fundamentally different. When performance reviews, promotion criteria, and development paths are built around the broader EM model, it disadvantages leaders from the self-managed structure or forces the organization to reshape successful teams just to fit the title.

The concern is that unifying role definitions without accounting for structural context can cause real harm.

That harm doesn’t just affect managers. It ripples through teams.

In EM-led models, where one person is accountable for delivery, agile practice, and performance metrics, teams often defer decisions upward, even when they have the skills and context to act. This dynamic can unintentionally train teams to wait for approval, eroding autonomy and making collaboration feel more performative than empowered.

By contrast, long-lived, self-managed teams tend to develop strong psychological safety over time. With clear boundaries and shared ownership, they solve problems together. However, when leadership begins redefining responsibilities around titles instead of how the team works, even these teams can start to hesitate.

Autonomy suffers not because self-managed models lack structure but because outside systems try to reimpose control where clarity already exists.

The friction isn’t theoretical. It appears in performance evaluations, hiring misalignment, and career planning confusion. Eventually, it reaches the team level where roles blur, ownership is second-guessed, and the structure that supported speed and trust begins to unravel.

Legacy Thinking and Structural Blind Spots

One of the biggest challenges in transformations like this isn’t technical. It’s cultural.

I’ve seen firsthand how legacy thinking, even well-meaning thinking, can shape decisions in ways that unintentionally resist growth. During this engagement, I saw it again.

In our initial conversation regarding team structures, an executive leader for the larger organization made the strategic decision:

“We’re not going to shift 40 teams to the self-managed model. It’s too resource-intensive. The smaller teams will need to align with our Engineering Manager model.”

In a follow-up conversation that I wasn’t part of, a VP from the larger organization said:

“I’ve been using the Engineering Manager model for most of my career. It works.”

These statements weren’t malicious. They were confident, experienced, and full of certainty.

Relying too much on past success can sometimes prevent us from seeing what fits the current situation. What worked earlier in your career or in a different system might not work now. True transformation requires more than confidence. It requires curiosity.

In yet another conversation, I heard secondhand that one of these same leaders, after our first meeting on the topic, asked:

“Has Phil ever been a software engineer?”

That question stuck with me; I wondered why my interest in how software is delivered was being equated with a lack of technical expertise. The leader could have simply looked at my LinkedIn profile or asked for my resume before challenging my background. More importantly, his comment revealed a mindset: If someone doesn’t share our experience, maybe their perspective doesn’t count.

These moments aren’t about ego. They’re about reflection, about recognizing how deeply personal experience can cloud structural objectivity. When leaders dismiss unfamiliar models because they don’t match their playbook, they don’t just reject ideas. They limit what the organization is allowed to become.

“Great leaders aren’t defined by how long they’ve done something. They’re defined by how often they’re willing to rethink it.”

What Self-Managed Teams Need to Work

To be clear, I’m not arguing that self-managed, cross-functional teams are inherently better. They only work when they’re supported intentionally.

In this case, the acquired teams didn’t stumble into autonomy. They evolved, shaped by architectural changes, growing product complexity, and deliberate investment in role clarity and delivery practices.

Self-managed teams work best when:

  • Team boundaries are aligned with system boundaries (Conway’s Law in action)
  • Each team has all the roles it needs to deliver independently: product, UX, engineering, QA, agile leadership
  • Leadership trusts the team to make decisions and solve problems
  • There are clear expectations for ownership, accountability, and feedback loops
  • The organization invests in agile coaching and systems thinking, not just delivery metrics

Autonomy is powerful, but it’s not a substitute for structure. It’s a different structure, distributed rather than centralized, but no less rigorous.

When organizations assume self-managed teams can succeed without support, they fail. But when they try to control teams that already have what they need to succeed, they risk breaking what’s working.

If you dismantle a working model to standardize roles without investing in the conditions that made those teams successful, you’re not gaining alignment; you’re sacrificing outcomes.

I see the challenge of finding the right hybrid solution during this transition, whether in role responsibilities or team structure. Only time will tell how these efforts turn out.

A Path Forward

While we started the conversation about picking one model over the other, the next set of conversations should be about understanding what each one needs to succeed and recognizing what might be lost by trying to force one to fit the other’s framework.

In this transition, I’m not advocating for a reversal of the decision. The leadership team has chosen the Engineering Manager model as the long-term structure. My role is to support that transition in a way that minimizes disruption, preserves what’s working, and honors the intent behind the change.

But that doesn’t mean copying a model wholesale. It means asking harder questions:

  • Can we implement the EM model without breaking value stream alignment or team autonomy?
  • Can we support delivery accountability without assigning an EM to every team if doing so fragments the architecture or inflates management layers?
  • Can we evolve role definitions to respect the existing strengths of self-managed teams instead of stripping them out?

I’ve noticed that the most effective organizations aren’t strict about sticking to rigid structures. Instead, they focus on designs that are fit for purpose.

Consider blending elements of both models:

  • Some teams may have embedded EMs; others may operate with distributed leadership and shared delivery ownership.
  • Agile responsibilities can be flexibly assigned based on team maturity, not hierarchy.
  • Career frameworks can accommodate different types of Engineering Managers as long as expectations are clear and fair and performance is measured in context.

You don’t need to choose between alignment and autonomy.

You need to design for both, based on the work, the system, and the people you have.

It isn’t easy; sometimes, a hybrid model might not scale perfectly. However, it’s often a better option than forcing consistency, which can harm results.

Final Reflection: Fit Over Familiarity

At the heart of this transition is a challenge I’ve seen a few times:

How do you unify an organization without undoing what’s already working?

The desire to standardize roles, expectations, and performance frameworks comes from a good place. But when titles are aligned without understanding the structural and cultural context that surrounds them, friction follows, quiet at first, then louder over time.

I’ve spent years helping engineering organizations navigate these types of changes, sometimes from the inside, sometimes as an advisor. And here’s what I’ve learned:

  • Job titles are not the problem, misaligned expectations are.
  • Structure should reflect system architecture, not management tradition.
  • Psychological safety and autonomy aren’t side effects of good teams, they’re preconditions for them.
  • Legacy success can cloud future-fit decisions, especially when we assume what worked before must work again.
  • Great teams thrive in models that are clear, intentional, and well-supported, whether they are EM-led or self-managed.

There is no perfect model. But there is such a thing as the right model for the moment, the product, and the architecture.

This integration effort isn’t just a structural change, it’s a chance to define what kind of engineering organization this will become.

If we stay curious, focus on outcomes, and respect the conditions that made teams effective to begin with, we can build a unified system that enables scale without sacrificing flow, clarity, or trust.

The outcome of this effort will depend on time and attitudes.

Key Takeaways

  • The EM and self-managed models are not interchangeable. Each comes with different responsibilities, accountability structures, and cultural implications.
  • Standardizing job titles without context can create unintended harm. Especially when one title represents two very different sets of expectations.
  • Misalignment erodes autonomy and psychological safety. Teams work best when they know where decisions live, and are trusted to make them.
  • Conway’s Law still applies. If team structure doesn’t mirror system architecture, coordination costs increase and ownership suffers.
  • A hybrid approach may be necessary. Especially in the short term, where context, maturity, and system constraints vary across teams.
  • You can support a transition while still protecting what works. Integration doesn’t have to mean erasure.

In the end, our goal is to establish clear and unified job responsibilities across teams, minimize churn, and ensure that no team member feels alienated or unsupported during the transition. We aim to build high-performing teams that can deliver on existing commitments while maintaining operational efficiency.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Engineering, Leadership, Lean, Product Delivery, Software Engineering, Value Stream Management

We Have Metrics. Now What?

May 11, 2025 by philc

7 min read

A Guide for Legacy-Minded Leaders on Using Metrics to Drive the Right Behavior

From Outputs to Outcomes

A senior executive recently asked a VP of Engineering and the Head of Architecture for industry benchmarks on Flow Metrics. At first, this seemed like a positive step, shifting the focus from individual output to team-level outcomes. However, the purpose of the request raised concerns. These benchmarks were intended to evaluate engineering managers’ performance for annual reviews and possibly bonuses.

That’s a problem. Using system-level metrics to judge individual performance is a common mistake. It might look good on paper, but it often hides deeper issues: it risks turning system-level signals into personal scorecards, creating the very dysfunction these metrics are meant to reveal and improve. Using metrics this way negates their value and invites gaming over genuine improvement. This guide is for senior leaders adopting team-level metrics who want to use them effectively. You’ve chosen better metrics; now, let’s make sure they work as intended.

To clarify, the executive’s team structure follows the Engineering Manager (EM) model, where EMs are responsible for the performance of the delivery team. In contrast, I support an alternative approach with autonomous teams built around team topologies. These teams include all the roles needed to deliver value, without a manager embedded in the team. These are two common but very different models of team structure and performance evaluation.

This isn’t the first time I’ve seen senior leaders misuse system-level metrics, and it likely won’t be the last. So I asked myself: Now that more leaders have agreed to adopt the right metrics, do they know how to use them responsibly?

I will admit that I was frustrated to learn of this request, but the event inspired me to create a guide for leaders, especially those used to traditional, output-focused models who are new to Flow Metrics and team-level measurement. This article shares my approach to metrics, focusing on curiosity, care, and a learning mindset. It’s not a set of rules. You’ve already chosen team-aligned metrics, and now I’ll explain how we use them to drive improvement while avoiding the pitfalls of judgment or manipulation.

A Note on Industry Benchmarks

As noted at the beginning of this post, the senior executive requested industry benchmarks, specifically for Flow Metrics. When benchmarks are treated as targets or internal scorecards, they can reduce transparency. Teams might focus on meeting the numbers instead of addressing challenges openly.

Benchmarks are helpful, but only when applied thoughtfully. They’re most effective at the portfolio or organizational level rather than as performance targets for individual teams. Teams differ significantly in architecture, complexity, support workload, and business focus. Comparing an infrastructure-heavy team to a greenfield product team isn’t practical or fair.

Use benchmarks to understand patterns, not to assign grades. Ask instead: “Is this team improving against their baseline? What’s helping or getting in the way?”

How to Use Team Metrics Without Breaking Trust or the System

1. Start by inviting teams into the process

  • Don’t tell them, “Flow Efficiency must go up 10%.”
  • Ask instead: “Here’s what the data shows. What’s behind this? What could we try?”

Why: Positive intent. Teams already want to improve. They’ll take ownership if you bring them into the process and give them time and space to act. Top-down mandates might push short-term results, but they usually kill long-term improvement.

2. Understand inputs vs. outputs

  • Output metrics (like Flow Time, PR throughput, or change failure rate) are results. You don’t control them directly.
  • Input metrics (like review turnaround time or number of unplanned interruptions) reflect behaviors teams can change.

Why: If you set targets on outputs, teams won’t know what to do. That’s when you get gaming or frustration. Input metrics give teams something they can improve. That’s how you get real system-level change.

I’ve been saying this for a while, and I like how Abi Noda and the DX team explain it: input vs. output metrics. It’s the same thing as leading vs. lagging indicators. Focus on what teams can influence, not just what you want to see improve.
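
For illustration only, a sketch of that pairing (the metric names are examples, not a prescribed set):

    # Illustrative pairing of input (leading) metrics with the output
    # (lagging) metrics they tend to influence. Names are examples only.
    INPUT_TO_OUTPUT = {
        "review_turnaround_hours": "flow_time",
        "unplanned_interruptions": "flow_efficiency",
        "pr_size_in_lines":        "change_failure_rate",
    }
    # Aim improvement experiments at the left-hand side; watch the
    # right-hand side to see whether the system actually moved.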

3. Don’t turn metrics into targets

When a measure becomes a target, it stops being useful.

  • Don’t turn system health metrics into KPIs.
  • If people feel judged by a number, they’ll focus on making the number look good instead of fixing the system.

Why: You’ll get shallow progress, not real change. And you won’t know the difference because the data will look better. The cost? Lost trust, lower morale, and bad decisions.

4. Always add context

  • Depending on the situation, a 10-day Flow Time might be great or terrible.
  • Ask about the team’s product, the architecture, the kind of work they do, and how much unplanned work they handle.

Why: Numbers without context are misleading. They don’t tell the story. If you act on them without understanding what’s behind them, you’ll create the wrong incentives and fix the wrong things.

5. Set targets the right way

  • Not every metric needs a goal.
  • Some should trend up; others should stay stable.
  • Don’t use blanket rules like “improve everything by 10%.”

Why: Metrics behave differently. Some take months to move. Others can be gamed easily. Think about what makes sense for that metric in that context. Real improvement takes time; chasing the wrong number can do more harm than good.

6. Tie metrics back to outcomes and the business

  • Don’t just say, “Flow Efficiency improved.” Ask, what changed?
    • Did we deliver faster?
    • Did we reduce the cost of delay?
    • Did we create customer value?

If you’ve read my other posts, I recommend tying every epic and initiative to an anticipated outcome. That mindset also applies to metrics. Don’t just look at the number. Ask what value it represents.

Also, it’s critical that teams use metrics to identify their bottlenecks. That’s the key. Real flow improvement comes from fixing the most significant constraint. If you’re improving something upstream or downstream of the bottleneck, you’re not improving flow. You’re just making things look better in one part of the system. It’s localized and often a wasted effort.

Why: If the goal is better business outcomes, you must connect what the team does with how it moves the needle. Metrics are just the starting point for that conversation.

7. Don’t track too many things

  • Stick to 3-5 input metrics at a time.
  • Make these part of retrospectives, not just leadership dashboards.

Why: Focus drives improvement. If everything is a priority, nothing is. Too many metrics dilute the team’s energy. Let them pick the right ones and go deep.

8. Build a feedback loop that works

  • Metrics are most useful when teams review them regularly.
  • Make time to reflect and adapt.

We’re still experimenting with what cadence works best. Right now, monthly retrospectives are the minimum; they give teams short feedback loops to adjust their improvement efforts. A quarterly check-in is still helpful for zooming out. Both are valuable, and both give teams enough time to try, reflect, and adapt.

Why: Improvement requires learning. Dashboards don’t improve teams. Feedback does. Create a rhythm where teams can test ideas, measure progress, and shift direction.

A Word of Caution About Using Metrics for Performance Reviews

Some leaders ask, “Can I use Flow Metrics to evaluate my engineering managers?” You can, but it’s risky.

Flow Metrics tell you how the system is performing. They’re not designed to evaluate individuals. If you tie them to bonuses or promotions, you’ll likely get:

  • Teams gaming the data
  • Managers focusing on optics, not problems
  • Reduced trust and openness

Why: When you make metrics part of a performance review, people stop using them for improvement. They stop learning. They play it safe. That hurts the team and the system.

Here’s what you can do instead:

In manager-led models, Engineering Managers are typically held accountable for team delivery. In cross-functional models, Agile Delivery Managers help guide improvement but don’t directly own delivery outcomes. In either case, someone helps the team improve.

That role should be evaluated, but not based on the raw numbers alone. Instead, assess how they supported improvement.

Thoughts on assessing “Guiding Team Improvement”:

Bottleneck Identification

  • Did they help surface and clarify constraints?
  • Are bottlenecks discussed and addressed?

Team-Led Problem Solving

  • Did they enable experiments and reflection, not dictate fixes?

Use of Metrics for Insight, Not Pressure

  • Did they foster learning and transparency?

Facilitation of Improvement Over Time

  • Do the trends show intentional learning?

Cross-Team Alignment and Issue Escalation

  • Are they surfacing systemic issues beyond their team?

Focus on influence, not control. Assess those accountable for guiding team performance based on how they influence system improvements and support their teams.

  • Use metrics to guide coaching conversations, not to judge.
  • Evaluate managers based on how they improve the system and support their teams.
  • Reward experimentation, transparency, and alignment to business value.

Performance is bigger than one number. Metrics help tell the story, but they aren’t the story.

Sidebar: What if Gamification Still Improves the Metric?

I’ve heard some folks say, “I’m okay with gamification. If the number gets better, the team’s getting better.” That logic might work in the short term but breaks down over time. Here’s why:

  1. It often hides real issues.
  2. It focuses on optics instead of outcomes.
  3. It breaks feedback loops that drive learning.
  4. It leads to local, not systemic, improvement.

So, while gamification might improve the score, it doesn’t consistently improve the system, and seldom as efficiently or sustainably.

If the goal is long-term performance, trust the process. Let teams learn from the data. Don’t let the number become the mission.

Metrics are just tools. If you treat them like a scoreboard, you’ll create fear. If you treat them like a flashlight, they’ll help you and your teams see what’s happening.

Don’t use metrics to judge individuals. Use them to guide conversations, surface problems, and support improvement. That’s how you build trust and better systems.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

How Value Stream Management and Product Operating Models Complement Each Other

April 27, 2025 by philc

7 min read

“The future of software delivery isn’t about process versus structure; it’s about harmonizing both to deliver better, faster, and smarter.”

Next month, I’ve been invited to meet with a senior leader from a large organization, who is also a respected industry figure, to discuss the Product Operating Model. I initially saw it as a good opportunity to prepare and share insights. Instead, it sparked an important realization.

In late 2020, I introduced Value Stream Management (VSM) to our organization, initiating the integration process in 2021. At the time, this marked the beginning of my understanding of VSM and our first attempt to implement it. Since then, we’ve gained more profound insights and valuable lessons, allowing us to refine our approach.

Recently, when asked about Value Stream Management (VSM), I explained that it helps make our Agile, Lean, and DevOps investments visible.
Now, with our VSM 1.5 approach, I highlight that it also makes our investments in Agile, Lean, DevOps, OKRs, and Outcomes more transparent.

Today, we are evolving our Value Stream Management (VSM) practices into what we now call VSM 1.5 (assuming we started at 0.9 or 1.0).

We took a more logical approach to redefining our Value Streams and aligning teams. We’ve also improved how we focus on metrics and hold discussions while requiring the anticipated outcomes of each Initiative or Epic to be documented in Jira. I outlined a strategy for leveraging team-level OKRs to align with broader business outcomes. I’ve also briefly touched on this concept in a few other articles.

As I prepared for this upcoming meeting, I came to a surprising realization:

We weren’t just implementing Value Stream Management, we were organically integrating Product Operating Model (POM) principles alongside it.

It wasn’t planned initially, but it’s now clear we weren’t choosing between two models. We were combining them, which became the foundation for our next level of operational maturity. This evolution reflects our commitment to continuously improving and aligning our methodologies to deliver greater customer and business impact.

Value Stream Management and the Product Operating Model

In software engineering, a value stream refers to the steps and activities involved in delivering a product or service to the customer. Value Stream Management (VSM) is the practice of optimizing this flow to improve speed, quality, and customer value.

A Product Operating Model (POM) serves as the blueprint for how a company designs, builds, and delivers software products. It ensures that teams, processes, and investments are aligned to maximize the customer’s value, driven by clear anticipated outcomes.

At first glance, Value Stream Management and the Product Operating Model are separate approaches, each with its terminology and focus. But when you look deeper, they share the same fundamental spirit: ensuring that our work creates meaningful value for customers and the business.

Despite this shared purpose, their emphasis differs slightly:

  • VSM focuses primarily on optimizing the flow of work, identifying bottlenecks, improving efficiency, and making work visible from idea to customer impact.
  • POM focuses on structuring teams and organizing ways of working, ensuring that ownership, funding, and decision-making are aligned to achieve clear, outcome-driven goals.

Together, they are not competing models but complementary disciplines: one sharpening how work flows, the other sharpening how teams are structured to deliver purposeful outcomes.

The key difference is where they start:

  • VSM starts with flow efficiency and system visibility.
  • POM starts with structure and ownership of the business outcome.

Why Combining POM and VSM Creates a Stronger Operating Model

Structure without optimized flow risks bureaucracy and stagnation.

Flow optimization without clear ownership and purpose risks fragmentation and, worse, the acceleration of delivering the wrong things faster.

Without aligning structure and flow to meaningful business and customer outcomes, organizations may become highly efficient at producing outputs that ultimately fail to drive real value.

Together, they provide what modern digital organizations need:

  • Product Operating Model (POM): Clear ownership, accountability, and alignment to expected business and customer outcomes.
  • Value Stream Management (VSM): Optimized, visible, and continuously improving flow of work across the organization.
  • Both combined: A complete operating model that structures teams around value and ensures that value flows efficiently to the customer.

When combined, POM and VSM offer a holistic view, structuring teams with purpose and optimizing how that purpose is realized through efficient delivery.

Industry Research: Reinforcing the Shift Toward Outcomes
Recent research reinforces the importance of this convergence. Planview’s 2024 Project to Product State of the Industry Report 1 found that elite-performing organizations are three times more likely to use cascading OKRs and measure success through business outcomes rather than output metrics. They are also twice as likely to regularly review Flow Metrics, confirming that outcome-driven practices combined with flow efficiency are becoming the new standard for high-performing organizations.

“Structure gives us ownership. Flow gives us visibility. Outcomes give us purpose. The strongest organizations master all three.”

Our Journey: VSM 1.5 as a Harmonization of POM and VSM

As we’ve matured our approach, it’s become clear that many of the practices we are implementing through VSM 1.5 closely align with the core principles of the Product Operating Model:

  • Clear Value Stream Identity:
    Using Domain-Driven Design (DDD) to define real business domains mirrors POM’s emphasis on persistent product boundaries.
  • Outcome Ownership:
    Mandating anticipated and actual outcomes aligns directly with POM’s shift from measuring outputs to business impacts.
  • Cross-functional Accountability:
    Structuring teams around value streams, not just skills or departments, mirrors the cross-functional empowerment central to POM.
  • Flow Visibility and Metrics:
    Monitoring flow efficiency, team health, and quality reflects VSM’s original intent and POM’s focus on systemic improvement.
  • Customer-Centric Thinking:
    Closing the loop to validate outcomes ensures that teams remain connected to customer value, not just internal delivery milestones.

In short, without realizing it at first, VSM 1.5 evolved into a model that harmonizes the structural clarity of the Product Operating Model with the operational discipline of Value Stream Management.

Recognizing Our Current Gaps

While VSM 1.5 represents a significant step forward, it is not the final destination. There are important areas where we are still evolving:

  • Mid-Level OKR Development: While we have mandated anticipated outcomes at the initiative level, consistently translating these into clear, mid-level OKRs and connecting team efforts explicitly to business outcomes remains a work in progress. Strengthening this bridge will be critical to our long-term success.
  • Funding by Product/Value Stream: Today, our funding models still follow more traditional structures. Based on my experience across the industry, evolving to product-based funding will require a longer-term cultural shift. However, we are laying the necessary foundation by focusing on outcome-driven initiatives, clear value stream ownership, and understanding the investment value of teams.

These gaps are not signs of failure. They prove we are building the muscle memory needed to achieve lasting, meaningful change.

The Practical Benefits We Are Seeing and Expect to See

  • Stronger alignment between Product, Architecture, and Delivery.
  • Reduced cognitive load for teams working within clear domain boundaries.
  • Clearer prioritization, alignment, and purpose based on customer and business value.
  • A cultural shift toward accountability not just for delivery but for results.
  • Faster, better-informed decisions from improved visibility and flow insights.
  • Sustained operational efficiency improvements through retrospectives, insights, and continuous experimentation.

Something to Think About for Leaders

If you’re leading digital transformation, don’t limit yourself to choosing a Product Operating Model or Value Stream Management.

The real transformation happens when you intentionally combine both:

  • Structure teams around customer and business value.
  • Optimize how work flows through those teams.
  • Hold teams accountable not just for delivery but for real, measurable outcomes.
  • Continuously learn and improve by leveraging data insights and closing the feedback loop.

The future of software delivery isn’t about process versus structure. It’s about harmonizing both to deliver better, faster, and smarter.

What We’ve Been Building

Preparing for this meeting has helped crystallize what we’ve been building: a modern operating model that combines ownership, flow, and outcomes, putting customer and business value at the center of everything we do.

While our journey continues, and some cultural shifts are still ahead, we have built the foundation for a more outcome-driven, operationally efficient, and scalable future.


I’m looking forward to the upcoming conversation: walking through the Product Operating Model, learning from their approach, and exploring how it aligns with, replaces, or complements our evolution with Value Stream Management. It’s a conversation about methods, and about how organizations are shifting from tracking outputs to delivering actual business impact.

Let’s keep the conversation going:
How is your organization evolving its operating model to drive outcomes over outputs, combining structure, flow, and purpose to create real value?

Related Articles

  1. From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable, April 12, 2025. Phil Clark.

References

  1. Planview. (2024). The 2024 Project to Product State of the Industry Report.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Product Delivery, Software Engineering

From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable

April 12, 2025 by philc

9 min read

Connect the dots: Show how engineering efforts drive business impact by linking their work to key organization metrics and outcomes. Highlight their value and contribution to overall success.

When Sarcasm Reveals Misalignment

Last week, one of my Agile Delivery Leaders brought forward a concern from her team that spoke volumes, not just about an individual, but about the kind of tension that quietly persists in even the most mature organizations.

She asked her team to define the expected outcome for a new Jira Epic, a practice I’ve asked all teams to adopt to ensure software investments align with business goals. However, it seems they struggled to identify the anticipated outcome. On top of that, a senior team member who’s been part of our transformation for years dismissed the idea instead of contributing to the discussion, leaving her torn between that leader’s authority and her own responsibilities. He commented something like:

“Why are we doing this? This is stupid. Phil read another book, and suddenly we’re all expected to jump on board.”

When I first heard that comment secondhand, I felt a wave of anger; it struck me as pure arrogance. This leader chose not to share his perspective with me directly, perhaps for reasons he deemed valid. But as I thought more about it, I realized it wasn’t arrogance at all, but ignorance. Not malicious ignorance, but the kind that comes from discomfort, uncertainty, or an unwillingness to admit they no longer understand or align with where things are going. Comments like that are often defense mechanisms. They mask deeper resistance, reveal a lack of clarity, or quietly question whether someone still fits into a system evolving beyond their comfort zone.

This wasn’t about rejecting change or progress; it was about pushing back against how we’re evolving. Moments like this remind us that true transformation isn’t just about forging ahead; it’s about fostering belief and alignment in mindset and actions as we move forward.

Purpose-Driven Development: My Approach to Sustainable Alignment

I asked teams to define anticipated outcomes not to add overhead but to protect the integrity of the way we build software.

Over the past decade, I’ve worked hard to lead our engineering teams and organization out of the “feature factory” trap, where the focus is on output volume, velocity, and shipping for the sake of shipping. Through that experience, I developed what I call Purpose-Driven Development (PDD).

Purpose-driven development might sound like a buzzword, but it’s how we bring Agile and Lean principles to life. It ensures delivery teams aren’t just writing code; they’re solving the right problems for the right reasons with clear goals and intentions.

PDD is built on one core idea: every initiative, epic, and sprint should be based on a clear understanding of why it matters.

Anticipated Outcomes: A Small Practice That Changes Everything

To embed this philosophy into our day-to-day work, we introduced a simple yet powerful practice:

Every Epic or Initiative must include an “Anticipated Outcome.”

Just a sentence or two that answers:

  • What are we hoping to achieve by doing this work?
  • How will it impact the customer, the Business, or the platform?

We don’t expect perfection. We expect intention. The goal isn’t to guarantee results but to anchor the work in a hypothesis that can be revisited, challenged, or learned from.
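
A hypothetical example for an Epic: “Simplify the signup flow to reduce onboarding drop-off; we anticipate fewer account-creation support tickets and a measurable lift in completed registrations.” One or two sentences like that are enough to anchor the conversation.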

This simple shift creates:

  • Greater alignment between teams and strategy
  • More meaningful prioritization
  • Opportunities to reflect on outcomes, not just outputs
  • Visibility across leadership into what we’re investing in

Who Might Push Back and Why That’s Okay

When we ask teams to define anticipated outcomes, it’s not about creating friction; it’s about creating focus. And this shouldn’t feel like a burden to most of the team.

I believe engineers will welcome it. Whether they realize it at first or not, this clarity gives them purpose. It ties their daily work to something that matters beyond code.

The only two roles I truly expect might feel frustration when asked to define anticipated outcomes are:

Product Managers and Technical Leaders.

And even that frustration? It’s understandable.

Product Managers often experience pain from not being involved early enough in the ideation or problem-definition stage. They may not know the anticipated outcome if they’re handed priorities from a higher-level product team without the context or autonomy to shape the solution. And that’s the problem, not the question itself, but the absence of trust and inclusion upstream.

For Technical Leaders, the frustration often surfaces when advocating for tech debt work. They know the system needs investment but struggle to translate that into a clear business reason. I get it; it’s frustrating when you know the consequences of letting entropy creep in, but you haven’t been taught to describe that impact in terms of business value, customer experience, or system performance.

But that’s exactly why this practice matters.

Asking for an anticipated outcome isn’t a punishment. It’s an exercise in alignment and clarity. And if that exercise surfaces frustration, that’s not failure. It’s the first step toward better decision-making and stronger cross-functional trust.

Whether it’s advocating for feature delivery or tech sustainability, we can’t afford to work in a vacuum. Every initiative, whether shiny and new or buried in system debt, must have a reason and a result we’re aiming for.

Anticipated Outcomes First, But OKR Alignment Is the Future

When I introduced the practice of documenting anticipated outcomes in every Epic or Initiative, I also asked for something more ambitious: a new field in our templates to capture the parent OKR or Key Result driving the work.

The goal was simple but powerful:

If we claim to be an outcome-driven organization, we should know what outcome we’re aiming for and where it fits in our broader strategy.

I aimed to help teams recognize that their Initiatives or Epics could serve as team-level Key Results directly tied to overarching business objectives. After all, this work doesn’t appear by chance. It’s being prioritized by Product, Operations, or the broader Business for a deliberate purpose: to drive progress and advance the company’s goals.
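As a sketch of what that more ambitious template could look like, reusing the hypothetical Epic shape from the earlier sketch, the parent Key Result becomes one more optional field. The identifier format below is purely illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Epic:
    """The same hypothetical shape, extended with OKR lineage."""
    title: str
    anticipated_outcome: Optional[str] = None
    parent_key_result: Optional[str] = None  # hypothetical OKR identifier; optional while the OKR tree matures

epic = Epic(
    title="Self-serve password reset",
    anticipated_outcome="Reduce password-related support tickets by letting customers reset on their own.",
    parent_key_result="KR-2025-Q3-07",  # illustrative ID, not a real key result
)
```

Keeping the field optional matters: the point is to invite the connection upward, not to fail work items whose OKR lineage isn’t mapped yet.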

But when I brought this to our Agile leadership group, the response was clear: this was too much to push simultaneously.

Some teams didn’t know the parent KR, and some initiatives weren’t tied to a clearly articulated OKR. Our organizational OKR structure was often incomplete, and we were missing the connective tissue between top-level objectives and team-level execution.

And they were right.

We’re still maturing in how we connect strategy to delivery. For many teams, asking for the anticipated outcome and the parent OKR at once felt like a burden, not a bridge.

So, we paused the push for now. My focus remains first on helping teams articulate the anticipated outcome. That alone is a leap forward. As we strengthen that muscle, I’ll help connect the dots upward, mapping team efforts to the business outcomes they drive, even if we don’t have the complete OKR infrastructure yet.

Alignment starts with clarity. And right now, clarity begins with purpose.

Without an anticipated outcome, every initiative is a dart thrown in the dark.

It might land somewhere useful or waste weeks of productivity on something that doesn’t matter.

Documenting the outcome gives us clarity and direction. It means we’re making strategic moves, not random ones. And it reduces the risk of high-output teams being incredibly productive… at the wrong thing.

Introducing the Feature Factory Ratio

To strengthen our focus on PDD and prioritize outcomes over outputs, we are introducing a new core insights metric as part of our internal diagnostics:

Feature Factory Ratio (FFR) =

(Number of Initiatives or Epics without Anticipated Outcomes / Total Number of Initiatives or Epics) × 100

The higher the ratio, the greater the risk of operating like a feature factory, moving fast but potentially delivering little that matters.

The lower the ratio, the more confident we can be that our teams are connecting their work to value.

This ratio isn’t about micromanagement; it’s about organizational awareness. It tells us where alignment is breaking down and where we may need to revisit how we communicate the “why” behind our work.
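For teams that want to compute it, here’s a minimal sketch of the arithmetic, again assuming the hypothetical Epic shape from the earlier sketches:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Epic:
    """Hypothetical work-item shape, as in the earlier sketches."""
    title: str
    anticipated_outcome: Optional[str] = None

def feature_factory_ratio(epics: list[Epic]) -> float:
    """FFR = (Epics without an Anticipated Outcome / total Epics) x 100."""
    if not epics:
        return 0.0  # no work items, nothing to diagnose
    missing = sum(1 for e in epics if not (e.anticipated_outcome and e.anticipated_outcome.strip()))
    return 100.0 * missing / len(epics)

# Example portfolio: 2 of 5 Epics lack an outcome, so FFR = 40.0
portfolio = [
    Epic("Self-serve password reset", "Cut password-related support tickets."),
    Epic("Checkout redesign", "Lift conversion on mobile checkout."),
    Epic("Upgrade payment gateway SDK", "Reduce failed-payment rate and retire a deprecated API."),
    Epic("Dark mode"),
    Epic("Admin export button"),
]
print(f"FFR: {feature_factory_ratio(portfolio):.1f}%")  # -> FFR: 40.0%
```

One design note: treating a blank or whitespace-only field as missing keeps the metric honest, since a placeholder outcome is still a missing outcome.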

Why We Call It the Feature Factory Ratio

When I introduced this metric, I considered several other names:

  • Outcome Alignment Ratio – Clear and descriptive, but lacking urgency
  • Clarity of Purpose Index – Insightful, but a bit abstract
  • Value Connection Metric – Emphasizes intent, but sounds like another analytics KPI

Each option framed the idea well, but they didn’t hit the nerve I wanted to expose.

Ultimately, I chose the Feature Factory Ratio because it speaks directly to the cultural pattern we’re trying to break.

It’s provocative by design. It challenges teams and leaders to ask, “Are we doing valuable work or just shipping features?” It turns an abstract concept into a visible metric and surfaces conversations we must have when our delivery drifts from our strategy.

Sometimes, naming things with impact helps us lead the behavior change that softer language can’t.

Sidebar: Superficial Alignment, The Silent Threat

One of the biggest leadership challenges in digital transformation isn’t open resistance; it’s superficial alignment.

Superficially aligned senior leaders attend the workshops, adopt the lingo, and show up to the town halls, but when asked to change how they work or lead, they bristle. They revert. They roll their eyes or make sarcastic comments.

But what they’re really saying is: I’m not sure I believe in this, or I don’t know how I fit anymore.

The danger is that superficial alignment looks like progress while quietly blocking true transformation. It creates cultural drag. It confuses teams and weakens momentum.

Moments like the one I shared remind me that transformation isn’t a checkbox but a leadership posture. And sometimes, those sarcastic comments? They’re your clearest sign of where real work still needs to happen.

Start Where You Are and Grow from There

We’re all at different points in our transformation journeys as individuals, teams, and organizations.

So, instead of reacting with frustration when someone can’t articulate an outcome or when a snide remark surfaces resistance, use it as a signal.

Meet your team where they are. Use every gap as a learning opportunity, not a leadership failure.

If a team can’t answer “What’s the anticipated outcome?” today, help them start asking it anyway. The point isn’t to have every answer right now. It’s to build the muscle so that someday, we will.

These questions aren’t meant to judge where we are. They’re meant to guide us toward where we’re trying to go.

This Is the Work of Modern Software Leadership

It’s easy to say we want to be outcome-driven. Embedding that belief into daily practice is harder, especially when senior voices or legacy habits push back.

But this is the work:

  • Aligning delivery to strategy
  • Teaching teams to think in terms of impact
  • Holding the line on purpose—even when it’s uncomfortable
  • Measuring not just what we ship but why we’re shipping it

Yes, I’ve read my fair share of books. But I’ve also lived the moments that shaped this conviction: adopting new initiatives like Value Stream Management within our division and organization, and learning firsthand what it means to deliver real value. I’ve led teams through transformation and seen what works. From that experience, and from working with other industry leaders, I’ve learned that software delivery with a clear purpose is more effective, empowering, and valuable for the Business, our customers, and the teams doing the work.


Leader’s Checklist: Outcome Alignment in Agile Teams

Use this checklist to guide your teams and yourself toward delivering work that matters.

1. Intent Before Execution

  • Is every Epic or Initiative anchored with a clear Anticipated Outcome?
  • Have we stated why this work matters to the customer, business, or platform?
  • Are we avoiding the trap of “just delivering features” without a defined end state?

2. Strategic Connection

  • Can this work be informally or explicitly tied to a higher-level Key Result, business goal, or product metric?
  • Are we comfortable asking, “What is the business driver behind this work?” even if it’s not written down yet?

3. Team-Level Awareness

  • Do developers, QA, and designers understand the purpose behind what they’re building?
  • Can the team articulate what success looks like beyond “we delivered it”?

4. Product Owner Empowerment

  • Has the Product Manager or Product Owner been involved in problem framing, or were they handed a solution from above?
  • If they seem disconnected from the outcome, is that a signal of upstream misalignment?

5. Tech Debt with Purpose

  • If the work is tech debt, have we articulated its impact on system reliability, scalability, or risk?
  • Can we tie this work back to customer experience, transaction volume, or long-term business performance?

6. Measurement & Reflection

  • Are we tracking how many Initiatives or Epics lack anticipated outcomes using the Feature Factory Ratio?
  • Do we ever reflect on anticipated vs. actual outcomes once work is delivered?

7. Cultural Leadership

  • Are we reinforcing that asking, “What’s the anticipated outcome?” is about focus, not control?
  • When we face resistance or discomfort, are we leading with curiosity instead of compliance?

Remember:

Clarity is a leadership responsibility.

If your teams don’t know why they’re doing the work, the real problem is upstream, not them.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Flow Retreat 2025: Practicing the Work Behind the Work

March 29, 2025 by philc

4 min read

The Flow Leadership Retreat was the vision of Steve Pereira, co-author of the recently released book Flow Engineering: From Value Stream Mapping to Effective Action, and Kristen Haennel, his partner in building communities rooted in learning, collaboration, and systems thinking. But this wasn’t a typical professional gathering. Rather than a conference packed with sessions and slides, they created an immersive experience designed to bring together professionals from diverse industries to step back, reflect, and practice what it truly means to improve the flow of work.

The setting, against the remote and stunning oceanfront of the Yucatán Peninsula, wasn’t just beautiful; it was intentional. Free from the usual distractions, it created space for focused thinking, deeper conversations, and clarity that rarely emerges in day-to-day operations.

When I joined this first-ever Flow Leadership Retreat in March 2025, I expected thoughtful discussions on delivery systems, value streams, and flow. What I didn’t expect was how much the environment, the people, and the open space to think differently would shift my entire perspective on how work works.

As someone who’s spent the last 4 years advocating for Value Stream Management (VSM) and building systems that improve visibility and flow, I came into the retreat hoping to sharpen those tools. I left with refined perspectives and a renewed appreciation for the power of stepping away from execution to examine the system itself.

Flow Before Framework

On Day 1, we didn’t jump straight into diagrams or frameworks. Instead, we challenged ourselves to define what flow really means, individually and collectively. Some participants reached for physics and nature metaphors; others spoke about momentum, energy, or alignment.

And that was the point.

We explored flow not just as a metric but as a state of system performance and psychological readiness, and we examined how misalignment between intention and execution becomes a barrier to it.

We examined constraints, those visible and invisible forces that slow work down. We also examined interpersonal and systemic friction as a root cause of waste and a signal for improvement.

The Power of Shared Experience

Day 2 brought stories. Coaches, consultants, and enterprise leaders shared what it’s like to bring flow practices into environments shaped by legacy processes, functional silos, and outdated metrics.

We didn’t just talk about practices. We compared scars. We discussed what happens when flow improvements stall, how leadership inertia manifests, and why psychological safety is essential to sustain improvement.

The value wasn’t in finding a single answer but in hearing how others had wrestled with the same questions from different perspectives. We found resonance in our challenges and, more importantly, in our commitment to change.

Mapping the System: Day 3 and the Five Maps

It wasn’t until Day 3 that we thoroughly walked through the Five Flow Engineering Maps. By then, we had laid the foundation through shared language and intent. The maps weren’t theoretical. They became immediate tools for diagnosing where our systems break down.

Here’s how we practiced:

  • Outcome Mapping helped us clarify what improvement meant and what we were trying to change in the system.
  • Current State Mapping exposed how work flows through the system, where it waits, and why it doesn’t arrive where or when we expect it.
  • Dependency Mapping surfaced the invisible contracts between teams, the blockers that live upstream and downstream of us.
  • Constraint Mapping allowed us to dig deeper into patterns, policies, and structures that prevent meaningful flow.
  • Flow Roadmapping helped us prioritize where to start, what to address next, and how to keep system improvement from becoming another unmeasured initiative.

We didn’t just learn to see the system; we practiced improving it by applying the maps to real-world case examples.

An Environment That Made Learning Flow

The villa, tucked away on the Yucatán coast, offered more than scenery. It offered permission to slow down, think, walk away from laptops, and walk into reflection. It gave us the space to surface ideas and hold them up to the breeze as some of our Post-it notes blew away.

That environment became part of the learning. It reminded us that improving flow isn’t just about the process. It’s also about the conditions for thinking, collaborating, and creating clarity.

Final Reflections

This retreat wasn’t about doing more work. It focused on collaboration from different perspectives and experiences, understanding how work flows through our systems, and finding ways to improve it that are sustainable, practical, and measurable.

It reaffirmed something I’ve long believed:

When we fix broken or inefficient systems, we unlock the full potential of our people, our products, and our performance.

I left with more than frameworks. I left with conversations I’ll be thinking about for months, new ways to approach problems I thought I understood, and the clarity that comes only when you step outside the system to study it fully.

I’m grateful for the experience and energized for what’s next.

References

  1. Pereira, S. & Davis, A. (2024). Flow Engineering: From Value Stream Mapping to Effective Action. IT Revolution Press.

Filed Under: Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management
