
We Have Metrics. Now What?

May 11, 2025 by philc

6 min read

A Guide for Legacy-Minded Leaders on Using Metrics to Drive the Right Behavior

From Outputs to Outcomes

A senior executive recently asked a VP of Engineering and the Head of Architecture for industry benchmarks on Flow Metrics. At first, this seemed like a positive step, shifting the focus from individual output to team-level outcomes. However, the purpose of the request raised concerns. These benchmarks were intended to evaluate engineering managers’ performance for annual reviews and possibly bonuses.

That’s a problem. Using system-level metrics to judge individual performance is a common mistake. It might look good on paper, but it risks turning system-level signals into personal scorecards, creating the very dysfunction these metrics are meant to reveal and improve. Using metrics this way negates their value and invites gaming over genuine improvement. This guide is for senior leaders adopting team-level metrics who want to use them effectively: you’ve chosen better metrics; now let’s make sure they work as intended.

To clarify, the executive’s team structure follows the Engineering Manager (EM) model, where EMs are responsible for the performance of the delivery team. In contrast, I support an alternative approach with autonomous teams built around team topologies. These teams include all the roles needed to deliver value, without a manager embedded in the team. These are two common but very different models of team structure and performance evaluation.

This isn’t the first time I’ve seen senior leaders misuse quantitative, team-level metrics, and it likely won’t be the last. So I asked myself: Now that more leaders have agreed to adopt the right metrics, do they know how to use them responsibly?

I will admit that I was frustrated to learn of this request, but the event inspired me to create a guide for leaders, especially those used to traditional, output-focused models who are new to Flow Metrics and team-level measurement. You’ve picked the right metrics; now comes the challenge: using them effectively without causing harm.

How to Use Team Metrics Without Breaking Trust or the System

1. Start by inviting teams into the process

  • Don’t tell them, “Flow Efficiency must go up 10%.”
  • Ask instead: “Here’s what the data shows. What’s behind this? What could we try?”

Why: Positive intent. Teams already want to improve. They’ll take ownership if you bring them into the process and give them time and space to act. Top-down mandates might push short-term results, but they usually kill long-term improvement.

2. Understand inputs vs. outputs

  • Output metrics (like Flow Time, PR throughput, or change failure rate) are results. You don’t control them directly.
  • Input metrics (like review turnaround time or number of unplanned interruptions) reflect behaviors teams can change.

Why: If you set targets on outputs, teams won’t know what to do. That’s when you get gaming or frustration. Input metrics give teams something they can improve. That’s how you get real system-level change.

I’ve been saying this for a while, and I like how Abi Noda and the DX team explain it: input vs. output metrics. It’s the same thing as leading vs. lagging indicators. Focus on what teams can influence, not just what you want to see improve.
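To make the distinction concrete, here’s a minimal sketch of how one input metric, review turnaround time, might be computed from pull request data. The records and field names are invented for illustration, not any particular tool’s API:

```python
from datetime import datetime, timedelta

# Hypothetical PR records: when review was requested vs. when the first
# review landed. Field names are illustrative, not a vendor's schema.
prs = [
    {"id": 101,
     "review_requested": datetime(2025, 5, 1, 9, 0),
     "first_review": datetime(2025, 5, 1, 15, 30)},
    {"id": 102,
     "review_requested": datetime(2025, 5, 2, 10, 0),
     "first_review": datetime(2025, 5, 5, 11, 0)},
]

# Input metric: review turnaround, a behavior the team can change directly
# (e.g., by agreeing to pick up reviews within one business day).
turnarounds = [pr["first_review"] - pr["review_requested"] for pr in prs]
avg = sum(turnarounds, timedelta()) / len(turnarounds)
print(f"Average review turnaround: {avg.total_seconds() / 3600:.1f}h")

# Output metrics like Flow Time or PR throughput emerge from many such
# inputs; you observe them, but you can't set them directly.
```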

3. Don’t turn metrics into targets

When a measure becomes a target, it stops being useful.

  • Don’t turn system health metrics into KPIs.
  • If people feel judged by a number, they’ll focus on making the number look good instead of fixing the system.

Why: You’ll get shallow progress, not real change. And you won’t know the difference because the data will look better. The cost? Lost trust, lower morale, and bad decisions.

4. Always add context

  • Depending on the situation, a 10-day Flow Time might be great or terrible.
  • Ask about the team’s product, the architecture, the kind of work they do, and how much unplanned work they handle.

Why: Numbers without context are misleading. They don’t tell the story. If you act on them without understanding what’s behind them, you’ll create the wrong incentives and fix the wrong things.

5. Set targets the right way

  • Not every metric needs a goal.
  • Some should trend up; others should stay stable.
  • Don’t use blanket rules like “improve everything by 10%.”

Why: Metrics behave differently. Some take months to move. Others can be gamed easily. Think about what makes sense for that metric in that context. Real improvement takes time; chasing the wrong number can do more harm than good.

6. Tie metrics back to outcomes and the business

  • Don’t just say, “Flow Efficiency improved.” Ask, what changed?
    • Did we deliver faster?
    • Did we reduce the cost of delay?
    • Did we create customer value?

If you’ve read my other posts, I recommend tying every epic and initiative to an anticipated outcome. That mindset also applies to metrics. Don’t just look at the number. Ask what value it represents.

Also, it’s critical that teams use metrics to identify their bottleneck. That’s the key. Real flow improvement comes from fixing the biggest constraint.

If you improve something downstream of the bottleneck, you’re not improving flow. You’re just making things look better in one part of the system. It’s localized and often a wasted effort.

Why: If the goal is better business outcomes, you must connect what the team does with how it moves the needle. Metrics are just the starting point for that conversation.
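Improving downstream of the bottleneck is wasted effort, and a toy model shows why: end-to-end flow is capped by the slowest stage. The stage names and capacities below are invented for illustration:

```python
# Illustrative stage capacities (items/week) for one delivery pipeline.
stages = {"Plan": 20, "Develop": 12, "Review": 6, "Test": 10, "Deploy": 25}

print(min(stages.values()))  # 6 -> Review is the constraint

stages["Deploy"] = 40        # optimize downstream of the bottleneck
print(min(stages.values()))  # still 6: end-to-end flow didn't improve

stages["Review"] = 12        # relieve the actual constraint instead
print(min(stages.values()))  # 10 -> flow improves; Test is the next bottleneck
```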

7. Don’t track too many things

  • Stick to 3-5 input metrics at a time.
  • Make these part of retrospectives, not just leadership dashboards.

Why: Focus drives improvement. If everything is a priority, nothing is. Too many metrics dilute the team’s energy. Let them pick the right ones and go deep.

8. Build a feedback loop that works

  • Metrics are most useful when teams review them regularly.
  • Make time to reflect and adapt.

We’re still experimenting with what cadence works best. Right now, monthly retrospectives are the minimum; they give teams short feedback loops to adjust their improvement efforts. A quarterly check-in is still helpful for zooming out. Both cycles give teams enough time to try, reflect, and adapt.

Why: Improvement requires learning. Dashboards don’t improve teams. Feedback does. Create a rhythm where teams can test ideas, measure progress, and shift direction.

A Word of Caution About Using Metrics for Performance Reviews

Some leaders ask, “Can I use Flow Metrics to evaluate my engineering managers?” You can, but it’s risky.

Flow Metrics tell you how the system is performing. They’re not designed to evaluate individuals. If you tie them to bonuses or promotions, you’ll likely get:

  • Teams gaming the data
  • Managers focusing on optics, not problems
  • Reduced trust and openness

Why: When you make metrics part of a performance review, people stop using them for improvement. They stop learning. They play it safe. That hurts the team and the system.

Here’s what you can do instead:

  • Use metrics to guide coaching conversations, not to judge.
  • Evaluate managers based on how they improve the system and support their teams.
  • Reward experimentation, transparency, and alignment to business value.

Performance is bigger than one number. Metrics help tell the story, but they aren’t the story.

Sidebar: What if Gamification Still Improves the Metric?

I’ve heard some folks say, “I’m okay with gamification; if the number gets better, the team gets better.”

I get where they’re coming from. Sometimes, gamifying a number can move it. But here’s the problem:

  1. It often hides the real issues.
  2. It encourages people to optimize for appearances, not outcomes.
  3. It breaks the feedback loop you need to find the real constraints.
  4. It builds a culture of avoidance instead of learning.

So, while gamification might improve the score, it doesn’t consistently improve the system, and rarely as efficiently as intentional, transparent work on the problem.

If the goal is long-term performance, trust the process. Let teams learn from the data. Don’t let the number become the mission.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

How Value Stream Management and Product Operating Models Complement Each Other

April 27, 2025 by philc

7 min read

“The future of software delivery isn’t about process versus structure; it’s about harmonizing both to deliver better, faster, and smarter.”

Next month, I’m scheduled to meet with a senior leader from a large organization, who is also a respected industry figure, to discuss their Product Operating Model. I initially saw it as a good opportunity to prepare and share insights. Instead, it sparked an important realization.

In late 2020, I introduced Value Stream Management (VSM) to our organization, initiating the integration process in 2021. At the time, this marked the beginning of my understanding of VSM and our first attempt to implement it. Since then, we’ve gained more profound insights and valuable lessons, allowing us to refine our approach.

Today, we are evolving our Value Stream Management (VSM) practices into what we now call VSM 1.5 (assuming we started at 0.9 or 1.0).

When asked about VSM in the past, I explained that it helps make our Agile, Lean, and DevOps investments visible. Now, with our VSM 1.5 approach, I highlight that it also makes our investments in Agile, Lean, DevOps, OKRs, and Outcomes more transparent.

We took a more logical approach to redefining our Value Streams and aligning teams. We’ve also improved how we focus on metrics and hold discussions while requiring the anticipated outcomes of each Initiative or Epic to be documented in Jira. I outlined a strategy for leveraging team-level OKRs to align with broader business outcomes. I’ve also briefly touched on this concept in a few other articles.

As I prepared for this upcoming meeting, I came to a surprising realization:

We weren’t just implementing Value Stream Management; we were organically integrating Product Operating Model (POM) principles alongside it.

It wasn’t planned initially, but it’s now clear we weren’t choosing between two models. We were combining them, which became the foundation for our next level of operational maturity. This evolution reflects our commitment to continuously improving and aligning our methodologies to deliver greater customer and business impact.

Value Stream Management and the Product Operating Model

In software engineering, a value stream refers to the steps and activities involved in delivering a product or service to the customer. Value Stream Management (VSM) is the practice of optimizing this flow to improve speed, quality, and customer value.

A Product Operating Model (POM) serves as the blueprint for how a company designs, builds, and delivers software products. It ensures that teams, processes, and investments are aligned to maximize the customer’s value, driven by clear anticipated outcomes.

At first glance, Value Stream Management and the Product Operating Model appear to be separate approaches, each with its own terminology and focus. But when you look deeper, they share the same fundamental spirit: ensuring that our work creates meaningful value for customers and the business.

Despite this shared purpose, their emphasis differs slightly:

  • VSM focuses primarily on optimizing the flow of work, identifying bottlenecks, improving efficiency, and making work visible from idea to customer impact.
  • POM focuses on structuring teams and organizing ways of working, ensuring that ownership, funding, and decision-making are aligned to achieve clear, outcome-driven goals.

Together, they are not competing models but complementary disciplines: one sharpening how work flows, the other sharpening how teams are structured to deliver purposeful outcomes.

The key difference is where they start:

  • VSM starts with flow efficiency and system visibility.
  • POM starts with structure and ownership of the business outcome.

Why Combining POM and VSM Creates a Stronger Operating Model

Structure without optimized flow risks bureaucracy and stagnation.

Flow optimization without clear ownership and purpose risks fragmentation and, worse, the acceleration of delivering the wrong things faster.

Without aligning structure and flow to meaningful business and customer outcomes, organizations may become highly efficient at producing outputs that ultimately fail to drive real value.

Together, they provide what modern digital organizations need:

  • Product Operating Model (POM): Clear ownership, accountability, and alignment to expected business and customer outcomes.
  • Value Stream Management (VSM): Optimized, visible, and continuously improving flow of work across the organization.
  • Both combined: A complete operating model that structures teams around value and ensures that value flows efficiently to the customer.

When combined, POM and VSM offer a holistic view, structuring teams with purpose and optimizing how that purpose is realized through efficient delivery.

Industry Research: Reinforcing the Shift Toward Outcomes

Recent research reinforces the importance of this convergence. Planview’s 2024 Project to Product State of the Industry Report [1] found that elite-performing organizations are three times more likely to use cascading OKRs and measure success through business outcomes rather than output metrics. They are also twice as likely to regularly review Flow Metrics, confirming that outcome-driven practices combined with flow efficiency are becoming the new standard for high-performing organizations.

“Structure gives us ownership. Flow gives us visibility. Outcomes give us purpose. The strongest organizations master all three.”

Our Journey: VSM 1.5 as a Harmonization of POM and VSM

As we’ve matured our approach, it’s become clear that many of the practices we are implementing through VSM 1.5 closely align with the core principles of the Product Operating Model:

  • Clear Value Stream Identity:
    Using Domain-Driven Design (DDD) to define real business domains mirrors POM’s emphasis on persistent product boundaries.
  • Outcome Ownership:
    Mandating anticipated and actual outcomes aligns directly with POM’s shift from measuring outputs to business impacts.
  • Cross-functional Accountability:
    Structuring teams around value streams, not just skills or departments, mirrors the cross-functional empowerment central to POM.
  • Flow Visibility and Metrics:
    Monitoring flow efficiency, team health, and quality reflects VSM’s original intent and POM’s focus on systemic improvement.
  • Customer-Centric Thinking:
    Closing the loop to validate outcomes ensures that teams remain connected to customer value, not just internal delivery milestones.

In short, without realizing it at first, VSM 1.5 evolved into a model that harmonizes the structural clarity of the Product Operating Model with the operational discipline of Value Stream Management.

Recognizing Our Current Gaps

While VSM 1.5 represents a significant step forward, it is not the final destination. There are important areas where we are still evolving:

  • Mid-Level OKR Development: While we have mandated anticipated outcomes at the initiative level, consistently translating these into clear, mid-level OKRs and connecting team efforts explicitly to business outcomes remains a work in progress. Strengthening this bridge will be critical to our long-term success.
  • Funding by Product/Value Stream: Today, our funding models still follow more traditional structures. Based on my experience across the industry, evolving to product-based funding will require a longer-term cultural shift. However, we are laying the necessary foundation by focusing on outcome-driven initiatives, clear value stream ownership, and understanding the investment value of teams.

These gaps are not signs of failure. They prove we are building the muscle memory needed to achieve lasting, meaningful change.

The Practical Benefits We Are Seeing and Expect to See

  • Stronger alignment between Product, Architecture, and Delivery.
  • Reduced cognitive load for teams working within clear domain boundaries.
  • Clearer prioritization, alignment, and purpose based on customer and business value.
  • A cultural shift toward accountability not just for delivery but for results.
  • Faster, better-informed decisions from improved visibility and flow insights.
  • Sustained operational efficiency improvements through retrospectives, insights, and continuous experimentation.

Something to Think About for Leaders

If you’re leading digital transformation, don’t limit yourself to choosing a Product Operating Model or Value Stream Management.

The real transformation happens when you intentionally combine both:

  • Structure teams around customer and business value.
  • Optimize how work flows through those teams.
  • Hold teams accountable not just for delivery but for real, measurable outcomes.
  • Continuously learn and improve by leveraging data insights and closing the feedback loop.

The future of software delivery isn’t about process versus structure. It’s about harmonizing both to deliver better, faster, and smarter.

What We’ve Been Building

Preparing for this meeting has helped crystallize what we’ve been building: a modern operating model that combines ownership, flow, and outcomes, putting customer and business value at the center of everything we do.

While our journey continues, and some cultural shifts are still ahead, we have built the foundation for a more outcome-driven, operationally efficient, and scalable future.


I’m looking forward to the upcoming conversation, in which we’ll walk through the Product Operating Model, learn from their approach, and explore how it aligns with, replaces, or complements our evolution with Value Stream Management. It’s a conversation not just about methods but about how organizations are shifting from tracking outputs to delivering actual business impact.

Let’s keep the conversation going:
How is your organization evolving its operating model to drive outcomes over outputs, combining structure, flow, and purpose to create real value?

Related Articles

  1. From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable, April 12, 2025. Phil Clark.

References

  1. The 2024 Project to Product State of the Industry Report. Planview.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Lean, Product Delivery, Software Engineering

From Feature Factory to Purpose-Driven Development: Why Anticipated Outcomes Are Non-Negotiable

April 12, 2025 by philc

9 min read

Connect the dots: show how engineering efforts drive business impact by linking that work to key organizational metrics and outcomes. Highlight engineering’s value and contribution to overall success.

When Sarcasm Reveals Misalignment

Last week, one of my Agile Delivery Leaders brought forward a concern from her team that spoke volumes, not just about an individual, but about the kind of tension that quietly persists in even the most mature organizations.

She asked her team to define the expected outcome for a new Jira Epic, a practice I’ve asked all teams to adopt to ensure software investments align with business goals. However, it seems they struggled to identify the anticipated outcome. On top of that, a senior team member who’s been part of our transformation for years dismissed the idea instead of contributing to the discussion, leaving her torn between his seniority and her own responsibilities. He commented something like:

“Why are we doing this? This is stupid. Phil read another book, and suddenly we’re all expected to jump on board.”

When I first heard that comment secondhand, I felt a wave of anger; it struck me as pure arrogance. This leader chose not to share his perspective with me directly, perhaps for reasons he deemed valid. But as I thought more about it, I realized it wasn’t arrogance at all, but ignorance. Not malicious ignorance, but the kind that comes from discomfort, uncertainty, or an unwillingness to admit they no longer understand or align with where things are going. Comments like that are often defense mechanisms. They mask deeper resistance, reveal a lack of clarity, or quietly question whether someone still fits into a system evolving beyond their comfort zone.

This wasn’t about rejecting change or progress; it was about pushing back against how we’re evolving. Moments like this remind us that true transformation isn’t just about forging ahead; it’s about fostering belief and alignment in mindset and actions as we move forward.

Purpose-Driven Development: My Approach to Sustainable Alignment

I asked teams to define anticipated outcomes not to add overhead but to protect the integrity of the way we build software.

Over the past decade, I’ve worked hard to lead our engineering teams and organization out of the “feature factory” trap, where the focus is on output volume, velocity, and shipping for the sake of shipping. Through that experience, I developed what I call Purpose-Driven Development (PDD), my own definition of the term.

Purpose-driven development might sound like a buzzword, but it’s how we bring Agile and Lean principles to life. It ensures delivery teams aren’t just writing code; they’re solving the right problems for the right reasons with clear goals and intentions.

PDD is built on one core idea: every initiative, epic, and sprint should be based on a clear understanding of why it matters.

Anticipated Outcomes: A Small Practice That Changes Everything

To embed this philosophy into our day-to-day work, we introduced a simple yet powerful practice:

Every Epic or Initiative must include an “Anticipated Outcome.”

Just a sentence or two that answers:

  • What are we hoping to achieve by doing this work?
  • How will it impact the customer, the Business, or the platform?

We don’t expect perfection. We expect intention. The goal isn’t to guarantee results but to anchor the work in a hypothesis that can be revisited, challenged, or learned from.

This simple shift creates:

  • Greater alignment between teams and strategy
  • More meaningful prioritization
  • Opportunities to reflect on outcomes, not just outputs
  • Visibility across leadership into what we’re investing in

Who Might Push Back and Why That’s Okay

When we ask teams to define anticipated outcomes, it’s not about creating friction; it’s about creating focus. And this shouldn’t feel like a burden to most of the team.

I believe engineers will welcome it. Whether they realize it at first or not, this clarity gives them purpose. It ties their daily work to something that matters beyond code.

The only two roles I truly expect might feel frustration when asked to define anticipated outcomes are:

Product Managers and Technical Leaders.

And even that frustration? It’s understandable.

Product Managers often experience pain from not being involved early enough in the ideation or problem-definition stage. They may not know the anticipated outcome if they’re handed priorities from a higher-level product team without the context or autonomy to shape the solution. And that’s the problem, not the question itself, but the absence of trust and inclusion upstream.

For Technical Leaders, the frustration often comes when advocating for tech debt work. They know the system needs investment but struggle to translate that into a clear business reason. I get it; it’s frustrating when you know the consequences of letting entropy creep in but haven’t been taught to describe that impact in terms of business value, customer experience, or system performance.

But that’s exactly why this practice matters.

Asking for an anticipated outcome isn’t a punishment. It’s an exercise in alignment and clarity. And if that exercise surfaces frustration, that’s not failure. It’s the first step toward better decision-making and stronger cross-functional trust.

Whether it’s advocating for feature delivery or tech sustainability, we can’t afford to work in a vacuum. Every initiative, whether shiny and new or buried in system debt, must have a reason and a result we’re aiming for.

Anticipated Outcomes First, But OKR Alignment Is the Future

When I introduced the practice of documenting anticipated outcomes in every Epic or Initiative, I also asked for something more ambitious: a new field in our templates to capture the parent OKR or Key Result driving the work.

The goal was simple but powerful:

If we claim to be an outcome-driven organization, we should know what outcome we’re aiming for and where it fits in our broader strategy.

I aimed to help teams recognize that their Initiatives or Epics could serve as team-level Key Results directly tied to overarching business objectives. After all, this work doesn’t appear by chance. It’s being prioritized by Product, Operations, or the broader Business for a deliberate purpose: to drive progress and advance the company’s goals.

But when I brought this to our Agile leadership group, the response was clear: this was too much to push simultaneously.

Some teams didn’t know the parent KR, and some initiatives weren’t tied to a clearly articulated OKR. Our organizational OKR structure was often incomplete, and we were missing the connective tissue between top-level objectives and team-level execution.

And they were right.

We’re still maturing in how we connect strategy to delivery. For many teams, asking for the anticipated outcome and the parent OKR at once felt like a burden, not a bridge.

So, we paused the push for now. My focus remains first on helping teams articulate the anticipated outcome. That alone is a leap forward. As we strengthen that muscle, I’ll help connect the dots upward, mapping team efforts to the business outcomes they drive, even if we don’t have the complete OKR infrastructure yet.

Alignment starts with clarity. And right now, clarity begins with purpose.

Without an anticipated outcome, every initiative is a dart thrown in the dark.

It might land somewhere useful or waste weeks of productivity on something that doesn’t matter.

Documenting the outcome gives us clarity and direction. It means we’re making strategic moves, not random ones. And it reduces the risk of high-output teams being incredibly productive… at the wrong thing.

Introducing the Feature Factory Ratio

To strengthen our focus on PDD and prioritize outcomes over outputs, we are introducing a new core insights metric as part of our internal diagnostics:

Feature Factory Ratio (FFR) =

(Number of Initiatives or Epics without Anticipated Outcomes / Total Number of Initiatives or Epics) × 100

The higher the ratio, the greater the risk of operating like a feature factory, moving fast but potentially delivering little that matters.

The lower the ratio, the more confident we can be that our teams are connecting their work to value.
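As a sketch of how the ratio could be computed from an issue-tracker export, assuming a free-text “Anticipated Outcome” field (the epic keys and values below are hypothetical, not our Jira schema):

```python
from dataclasses import dataclass

@dataclass
class Epic:
    key: str
    anticipated_outcome: str | None  # free-text "Anticipated Outcome" field

def feature_factory_ratio(epics: list[Epic]) -> float:
    """Percentage of Initiatives/Epics with no documented anticipated outcome."""
    if not epics:
        return 0.0
    missing = sum(1 for e in epics if not (e.anticipated_outcome or "").strip())
    return 100.0 * missing / len(epics)

portfolio = [
    Epic("PLAT-101", "Cut checkout latency 20% to lift conversion"),
    Epic("PLAT-102", None),  # no outcome documented
    Epic("PLAT-103", "  "),  # a blank field counts as missing
]
print(f"FFR: {feature_factory_ratio(portfolio):.1f}%")  # FFR: 66.7%
```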

This ratio isn’t about micromanagement; it’s about organizational awareness. It tells us where alignment is breaking down and where we may need to revisit how we communicate the “why” behind our work.

Why We Call It the Feature Factory Ratio

When I introduced this metric, I considered several other names:

  • Outcome Alignment Ratio – Clear and descriptive, but lacking urgency
  • Clarity of Purpose Index – Insightful, but a bit abstract
  • Value Connection Metric – Emphasizes intent, but sounds like another analytics KPI

Each option framed the idea well, but they didn’t hit the nerve I wanted to expose.

Ultimately, I chose the Feature Factory Ratio because it speaks directly to the cultural pattern we’re trying to break.

It’s provocative by design. It challenges teams and leaders to ask, “Are we doing valuable work or just shipping features?” It turns an abstract concept into a visible metric and surfaces conversations we must have when our delivery drifts from our strategy.

Sometimes, naming things with impact helps us lead the behavior change that softer language can’t.

Sidebar: Superficial Alignment, The Silent Threat

One of the biggest leadership challenges in digital transformation isn’t open resistance, it’s superficial alignment.

These are the senior leaders who attend the workshops, adopt the lingo, and show up to the town halls, but when asked to change how they work or lead, they bristle. They revert. They roll their eyes or make sarcastic comments.

But what they’re really saying is: I’m not sure I believe in this, or I don’t know how I fit anymore.

The danger is that superficial alignment looks like progress while quietly blocking true transformation. It creates cultural drag. It confuses teams and weakens momentum.

Moments like the one I shared remind me that transformation isn’t a checkbox but a leadership posture. And sometimes, those sarcastic comments? They’re your clearest sign of where real work still needs to happen.

Start Where You Are and Grow from There

We’re all at different points in our transformation journeys as individuals, teams, and organizations.

So, instead of reacting with frustration when someone can’t articulate an outcome or when a snide remark surfaces resistance, use it as a signal.

Meet your team where they are. Use every gap as a learning opportunity, not a leadership failure.

If a team can’t answer “What’s the anticipated outcome?” today, help them start asking it anyway. The point isn’t to have every answer right now. It’s to build the muscle so that someday, we will.

These questions aren’t meant to judge where we are. They’re meant to guide us toward where we’re trying to go.

This Is the Work of Modern Software Leadership

It’s easy to say we want to be outcome-driven. Embedding that belief into daily practice is harder, especially when senior voices or legacy habits push back.

But this is the work:

  • Aligning delivery to strategy
  • Teaching teams to think in terms of impact
  • Holding the line on purpose—even when it’s uncomfortable
  • Measuring not just what we ship but why we’re shipping it

Yes, I’ve read my fair share of books. Along the way, key moments and real outcomes have shaped my journey in adopting new initiatives within our division and organization, such as Value Stream Management, and my understanding of what it means to deliver real value. I’ve led teams through transformation and seen what works. From my experience in our organization and working with other industry leaders, I’ve learned that software delivery with a clear purpose is more effective, empowering, and valuable for the Business, our customers, and the teams doing the work.


Leader’s Checklist: Outcome Alignment in Agile Teams

Use this checklist to guide your teams and yourself toward delivering work that matters.

1. Intent Before Execution

  • Is every Epic or Initiative anchored with a clear Anticipated Outcome?
  • Have we stated why this work matters to the customer, business, or platform?
  • Are we avoiding the trap of “just delivering features” without a defined end state?

2. Strategic Connection

  • Can this work be informally or explicitly tied to a higher-level Key Result, business goal, or product metric?
  • Are we comfortable asking, “What is the business driver behind this work?” even if it’s not written down yet?

3. Team-Level Awareness

  • Do developers, QA, and designers understand the purpose behind what they’re building?
  • Can the team articulate what success looks like beyond “we delivered it”?

4. Product Owner Empowerment

  • Has the Product Manager or Product Owner been involved in problem framing, or were they handed a solution from above?
  • If they seem disconnected from the outcome, is that a signal of upstream misalignment?

5. Tech Debt with Purpose

  • If the work is tech debt, have we articulated its impact on system reliability, scalability, or risk?
  • Can we tie this work back to customer experience, transaction volume, or long-term business performance?

6. Measurement & Reflection

  • Are we tracking how many Initiatives or Epics lack anticipated outcomes using the Feature Factory Ratio?
  • Do we ever reflect on anticipated vs. actual outcomes once work is delivered?

7. Cultural Leadership

  • Are we reinforcing that asking, “What’s the anticipated outcome?” is about focus, not control?
  • When we face resistance or discomfort, are we leading with curiosity instead of compliance?

Remember:

Clarity is a leadership responsibility.

If your teams don’t know why they’re doing the work, the real problem is upstream, not them.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Flow Retreat 2025: Practicing the Work Behind the Work

March 29, 2025 by philc

4 min read

The Flow Leadership Retreat was the vision of Steve Pereira, co-author of the recently released book Flow Engineering: From Value Stream Mapping to Effective Action, and Kristen Haennel, his partner in building communities rooted in learning, collaboration, and systems thinking. But this wasn’t a typical professional gathering. Rather than a conference packed with sessions and slides, they created an immersive experience designed to bring together professionals from diverse industries to step back, reflect, and practice what it truly means to improve the flow of work.

The setting, against the remote and stunning oceanfront of the Yucatán Peninsula, wasn’t just beautiful; it was intentional. Free from the usual distractions, it created space for focused thinking, deeper conversations, and clarity that rarely emerges in day-to-day operations.

When I joined this first-ever Flow Leadership Retreat in March 2025, I expected thoughtful discussions on delivery systems, value streams, and flow. What I didn’t expect was how much the environment, the people, and the open space to think differently would shift my entire perspective on how work works.

As someone who’s spent the last 4 years advocating for Value Stream Management (VSM) and building systems that improve visibility and flow, I came into the retreat hoping to sharpen those tools. I left with refined perspectives and a renewed appreciation for the power of stepping away from execution to examine the system itself.

Flow Before Framework

On Day 1, we didn’t jump straight into diagrams or frameworks. Instead, we challenged ourselves to define what flow really means, individually and collectively. Some participants reached for physics and nature metaphors; others spoke about momentum, energy, or alignment.

And that was the point.

We explored flow not just as a metric but also as a state of system performance, psychological readiness, and sometimes a barrier caused by misalignment between intention and execution.

We examined constraints, those visible and invisible forces that slow work down. We also examined interpersonal and systemic friction as a root cause of waste and a signal for improvement.

The Power of Shared Experience

Day 2 brought stories. Coaches, consultants, and enterprise leaders shared what it’s like to bring flow practices into environments shaped by legacy processes, functional silos, and outdated metrics.

We didn’t just talk about practices. We compared scars. We discussed what happens when flow improvements stall, how leadership inertia manifests, and why psychological safety is essential to sustain improvement.

The value wasn’t in finding a single answer but in hearing how others had wrestled with the same questions from different perspectives. We found resonance in our challenges and, more importantly, in our commitment to change.

Mapping the System: Day 3 and the Five Maps

It wasn’t until Day 3 that we thoroughly walked through the Five Flow Engineering Maps. By then, we had laid the foundation through shared language and intent. The maps weren’t theoretical. They became immediate tools for diagnosing where our systems break down.

Here’s how we practiced:

  • Outcome Mapping helped us clarify what improvement meant and what we were trying to change in the system.
  • Current State Mapping exposed how work flows through the system, where it waits, and why it doesn’t arrive where or when we expect it.
  • Dependency Mapping surfaced the invisible contracts between teams, the blockers that live upstream and downstream of us.
  • Constraint Mapping allowed us to dig deeper into patterns, policies, and structures that prevent meaningful flow.
  • Flow Roadmapping helped us prioritize where to start, what to address next, and how to keep system improvement from becoming another unmeasured initiative.

We didn’t just learn to see the system. We refined our skills by applying the maps to real-world case examples.

An Environment That Made Learning Flow

The villa, tucked away on the Yucatán coast, offered more than scenery. It offered permission to slow down, think, walk away from laptops, and walk into reflection. It gave us the space to surface ideas and hold them up to the breeze as some of our Post-it notes blew away.

That environment became part of the learning. It reminded us that improving flow isn’t just about the process. It’s also about the conditions for thinking, collaborating, and creating clarity.

Final Reflections

This retreat wasn’t about doing more work. It focused on collaboration from different perspectives and experiences, understanding how work flows through our systems, and finding ways to improve it that are sustainable, practical, and measurable.

It reaffirmed something I’ve long believed:

When we fix broken or inefficient systems, we unlock the full potential of our people, our products, and our performance.

I left with more than frameworks. I left with conversations I’ll be thinking about for months, new ways to approach problems I thought I understood, and the clarity that comes only when you step outside the system to study it fully.

I’m grateful for the experience and energized for what’s next.

References

  1. Pereira, S. & Davis, A. (2024). Flow Engineering: From Value Stream Mapping to Effective Action. IT Revolution Press.

Filed Under: Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Avoiding Flow Metric Confusion: Aligning Agile Work Hierarchy to Flow Items

February 17, 2025 by philc

10 min read

Adopting Flow Metrics without understanding Value Stream Management, Flow Items, and the Agile Work Hierarchy can create confusion and misalignment. Connecting Agile work to Flow Items becomes challenging without this foundation, making it harder to measure and improve value flow. This article simplifies these concepts to help teams and leaders align Agile work with Flow Metrics, enabling better visibility and greater efficiency.

A common issue I’ve noticed is misinterpreting Agile work elements when modeling Flow Items. Many organizations don’t fully understand or follow the standard Agile Work Hierarchy.

This article aims to guide leaders in understanding and applying a framework to align Flow Items while exploring essential Value Stream concepts. Drawing from my experience, I’ve crafted this guide to simplify and make these topics more accessible. I hope it will provide valuable insights and practical tools for others navigating similar challenges.

Misunderstanding Resulting in Confusion

Recently, I observed a Technology division new to Flow Metrics trying to map their Jira Issue types to Flow Items. It was clear they didn’t have a solid understanding of the scope and context of a Flow Item. On top of that, multiple product and platform teams had the freedom to structure and manage their Jira instances however they wanted, with no effort to follow even basic industry guidelines. The teams had so many custom types that some members even argued tasks were deliverables, causing confusion between hierarchy levels and Flow Items.

Clearing the Murky Waters

I created the following Agile Work Hierarchy model to help teams standardize their project tracking tools, have a common language, and better align with Flow Items:

The Agile Work Hierarchy as commonly defined by industry standards

  1. Theme: At the top of the hierarchy, themes represent broad organizational goals or focus areas. They align teams and ensure all work supports overall business objectives.
  2. Initiative: A business objective made up of multiple epics working together to achieve a broader organizational goal or outcome.
  3. Epic: A large body of work that supports the initiative and can be divided into multiple features or user stories.
  4. Feature: A product function or characteristic that delivers value to the user. It typically originates from an epic and is further broken down into user stories.
  5. User Story: A simple, user-centric description of a requirement or request that explains a specific feature or function needed by the user.
  6. Task: A granular unit of work necessary to complete a user story. These are actionable steps assigned to team members.

“Theme” isn’t an official Scrum term but is commonly used in Agile practices. Whether you view Themes and Initiatives as the same or different doesn’t impact the focus of this discussion. The key is to identify the right level of work items for Flow Metrics and Flow Items, regardless of naming conventions, to effectively map core work items in your practice.

The names used for work types in delivery tracking can vary depending on the platform or tools. For example, our teams have worked with two popular tools: Jira and Rally. Rally and Jira use different terms and structures for Agile work items and workflows, particularly in their terminology and hierarchy.

Rally

  • Initiatives/Epics: Rally uses Portfolio Items to represent high-level work, such as initiatives or epics, that align with strategic goals.
  • Features: These are essentially lower-level Portfolio Items used to group related user stories.
  • User Stories: Rally uses user stories as a key work item, aligned with Agile principles.
  • Tasks: These are smaller parts of user stories, breaking the work into more manageable steps.

Jira

  • Initiatives/Epics: Jira uses Epics for large bodies of work and supports an additional layer (e.g., Initiatives) via Advanced Roadmaps in premium plans.
  • Features: Jira doesn’t specifically label items as “features.” Instead, it often uses Epics or custom issue types to serve a similar purpose.
  • User Stories: User stories are tracked as Issues and can be customized with different fields and workflows.
  • Tasks: Tasks are a basic issue type in Jira, with sub-tasks available for more detailed tracking.

This post isn’t about recommending a specific labeling structure. It’s about understanding your work hierarchy to align your flow items with the smallest deliverable unit that adds value to your customer or organization.

We use an initiative-epic-user story-task hierarchy to structure work. Teams organize epics, which break down into smaller user stories. These stories are the main work items; they map to Flow Items and represent the smallest units of value delivered to production and customers. Our process doesn’t include a feature layer.

Addressing Release Misconceptions

Another common misunderstanding I encountered in a different division was how Product Managers using Aha! defined their work hierarchy. They categorized the highest-level program element as a “Release.” However, a release is not a work item in the industry-standard hierarchy. It is a scheduled deployment of a set of features or functionalities to the customer. A release can include multiple features, which in turn contain user stories and tasks. Misusing this term can lead to confusion in workflow tracking and alignment across teams. You don’t need to change how you and your teams label your levels, but it’s important to have clear, aligned definitions to help identify Flow Items.

Flow Items in Flow Metrics

In Project to Product, Dr. Mik Kersten defines Flow Items as the primary units of work that move through a software value stream. These items fall into four categories:

  • Features: New functionality delivering business value to the customer.
  • Defects: Work items addressing quality issues impacting the user experience.
  • Risks: Tasks focused on mitigating security, compliance, or governance concerns.
  • Debts: Efforts improving system health, such as code refactoring or infrastructure updates.

These categories ensure that all work flowing through a value stream is accounted for and measured effectively. [1]
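As a sketch, an explicit mapping like the one below keeps the translation from tracker issue types to Flow Items visible, and surfaces unmapped custom types instead of silently dropping work. The issue-type names are hypothetical; adjust them to your own Jira or Rally configuration:

```python
# Hypothetical issue-type names; adjust to your tracker's configuration.
FLOW_ITEM_MAP = {
    "Story": "Feature",       # new functionality delivering customer value
    "Bug": "Defect",          # quality issues impacting the user experience
    "Security Task": "Risk",  # security, compliance, governance work
    "Tech Debt": "Debt",      # system health and refactoring work
}

def to_flow_item(issue_type: str) -> str:
    """Map a tracker issue type onto one of the four Flow Item categories."""
    if issue_type not in FLOW_ITEM_MAP:
        # Unmapped custom types are the usual source of confusion; surface
        # them instead of silently excluding work from the value stream.
        raise ValueError(f"No Flow Item mapping for issue type: {issue_type!r}")
    return FLOW_ITEM_MAP[issue_type]
```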

Clarifying Flow Items in the Agile Work Hierarchy

In Agile methodologies, it’s common practice to decompose Epics, Features, and User Stories into the smallest units of work that can deliver value independently. For instance, teams often aim to deliver User Stories sized at three to five story points, ensuring each piece is manageable and valuable.

Drawing insights from Mik Kersten’s Project to Product and the concept of Flow Metrics, I define a Flow Item as the smallest unit of work that delivers meaningful value to the user or the business. Even if all other work stops, delivering this item will still result in a clear benefit. Whether your organization calls them Features or User Stories, the key is that each Flow Item must deliver value on its own.

An anti-pattern to be cautious of involves breaking down work into segments that, when delivered individually, don’t provide standalone value. For example, delivering only the URL endpoint for an API without its functional components doesn’t offer immediate value to the customer. Such practices can lead to misleading metrics, suggesting progress where the end-user perceives none.

To align Flow Items correctly with Agile work elements, I created a version of the Agile Work Hierarchy that follows the industry naming guidelines and highlights Features as the Flow Items (or User Stories, depending on your context). Remember, Flow Items should be the smallest units of work that deliver value to customers.

In the Agile hierarchy I’ve outlined, a Flow Item most closely aligns with a Feature. While User Stories are smaller, detailed requirements that contribute to a Feature, the Flow Framework operates at a higher level, focusing on the delivery of Features as complete units of value. [1]


The goal is to choose one approach and stick with it: a clear, standardized unit of work. It’s crucial to ensure that teams fully understand your definitions and how your work items align with Flow Items.

Clarifying Value Streams, Stages, and Mapping

This article does not aim to provide an in-depth education on Value Streams, as there are many excellent resources available to help you and your teams explore Value Streams and Value Stream Mapping. However, adopting Flow Metrics without first investing in this foundational knowledge can lead to significant challenges. Organizations often struggle to define a Value Stream, understand its components, and connect these elements to mapping efforts. Common difficulties tend to arise in four key areas:

  1. The Scope or Definition of a Value Stream
    • A Value Stream represents the entire end-to-end process of a product, from concept to cash (or value realization).
    • A Value Stream should cover the entire product or product portfolio, including all teams and team members involved.
  2. The Stages or Phases of a Value Stream
    • Many teams confuse operational execution tasks with high-level Value Stream stages.
    • The Value Stream Model is typically presented as a series of distinct stages: Discovery, Delivery, Operation, and Support. Each stage represents a critical phase in the process of creating and delivering value.
  3. The Steps That Are Used to Create a Value Stream Map
    • Value Stream Mapping involves breaking down high-level stages into steps that contribute to the flow of value.
    • Steps such as Backlog, Planning, Development, Testing, Deployment, and Verification help identify where value is added and where inefficiencies occur.
    • These steps are different from granular tasks, which are the specific processes carried out within each step.
  4. The Confusion Between a Value Stream Mapping Step and Process Mapping
    • One of the most common mistakes teams make is treating process mapping as value stream mapping.
    • Value Stream Mapping involves outlining the steps within a stage or phase of the Value Stream to identify delays, bottlenecks, and inefficiencies.
    • Process Mapping, on the other hand, is about detailing the specific activities within each step, such as coding, pull requests, code review, CI build, etc.

By addressing these four areas of confusion, teams can better align their understanding of Value Streams, ensuring Flow Metrics are applied correctly and effectively measured.
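To see why step-level mapping matters, here’s a minimal sketch that computes Flow Time and Flow Efficiency from per-step active and waiting times. The numbers are invented for illustration:

```python
# Invented numbers for one work item: (step, active hours, waiting hours).
steps = [
    ("Backlog",      0.0, 72.0),
    ("Planning",     4.0, 20.0),
    ("Development", 30.0, 16.0),
    ("Testing",      8.0, 40.0),
    ("Deployment",   1.0,  6.0),
]

active = sum(a for _, a, _ in steps)
waiting = sum(w for _, _, w in steps)
print(f"Flow Time: {active + waiting:.0f}h")                         # 197h
print(f"Flow Efficiency: {100 * active / (active + waiting):.0f}%")  # 22%
# The long waits in Backlog and Testing, not the coding itself, dominate
# Flow Time; that's where a Value Stream Map points first.
```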

(Diagrams: Value Stream Model, Value Stream Mapping, and Process Mapping.)

Conclusion

When Agile work structures and Flow Items aren’t aligned, measuring and improving value delivery with Flow Metrics becomes harder. Linking Flow Items to the correct Agile work item improves data quality and decision-making. Using a structured approach and best practices helps businesses maximize Flow Metrics and deliver results.

Bonus: Team Level OKRs

(Excerpt from Breaking Free from the Build Trap, Dec 25, 2024.)

OKRs align the team’s work with customer needs and organizational objectives. This alignment transforms sprints, epics, and initiatives from mere tasks into measurable milestones that drive meaningful business results.

OKRs bridge the gap between your team’s efforts and the outcomes that truly matter, encouraging a focus on value rather than speed. This mindset shift fosters intentional work and delivers impactful results.

Team Level OKRs tend to be either Initiatives or Epics.

Teams should define sprint goals based on outcomes, not tasks. Example: “Enhance system performance by improving response times by 5%” (outcome) versus “Complete three refactoring tickets” (task).

To develop a team-level OKR from a parent key result, you can use a method known as explicit alignment or cascading. This process transforms a higher-level key result into a focused objective for your team, ensuring clear alignment and purpose. Here’s an effective way to approach it:

  1. Define your desired outcome and ensure it aligns with a relevant Parent Key Result:
    Begin by reviewing the overarching OKRs at the company or department level. Identify a key result that aligns with your team’s responsibilities and can be directly influenced within your team’s scope.
  2. Transform the Key Result into an Objective:
    Reframe the parent key result as a clear objective for your team (the expected outcome). This objective will serve as the central focus of their efforts.
  3. Develop Supporting Key Results:
    Create 3-5 key results to help your team achieve this new objective. These should be specific, measurable, and aligned with the overall goal or outcome.
  4. Ensure Alignment:
    Make sure your team’s OKRs align with and support the higher-level objective they are based on. Set clear targets for each key result by defining what success looks like for your team relative to the parent key result. This will help keep your team focused and guide their efforts toward achieving the desired outcome (a worked example follows below).
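To make the cascade concrete, here’s a hypothetical illustration (every figure is made up): a parent key result of “Reduce monthly customer churn from 8% to 5%” becomes the team objective “Make onboarding effortless so new customers stay.” Supporting key results might be: raise onboarding completion from 60% to 85%, cut time-to-first-value from five days to one, and resolve 90% of onboarding support tickets within 24 hours. Each Initiative or Epic the team takes on should then trace back to one of these key results.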

References

  1. Project to Product: What Flows Through a Software Value Stream?, November 9, 2018 By Mik Kersten, https://blog.planview.com/what-flows-through-a-software-value-stream/

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery, Value Stream Management

Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics

February 15, 2025 by philc

9 min read

A Landscape of Confusion

In 2025, the tech industry is more data-driven than ever, with engineering teams and leaders relying on metrics-driven insights to improve software delivery performance. However, the rise of Software Engineering Intelligence (SEI) platforms, existing Value Stream Management (VSM) platforms, and engineering metrics solutions has created a new challenge: marketing confusion.

Platform vendors, eager to differentiate their offerings, often blur the lines between these categories, making it increasingly difficult for technology leaders to distinguish between tools designed for engineering visibility versus those aimed at full-value stream optimization. This confusion leads to misaligned expectations, ineffective investments, and ultimately, frustration when organizations realize they’ve purchased a solution that doesn’t align with their needs.

This article serves as a follow-up to my August 2024 article:

  • Navigating the Digital Product Workflow Metrics Landscape: From DORA to Comprehensive Value Stream Management Platform Solutions

Since that article, the landscape has continued to evolve. SEI platforms have gained significant momentum, the DX Core 4 framework has been introduced, and VSM platforms are still in the conversation but with a shifting narrative.

Derek Holt, CEO of Digital.ai, has observed what he believes to be a major industry shift:

“While Value Stream Management continued to lose steam in 2024, we also saw the fast emergence of Software Engineering Intelligence (SEI) to take its place.” – Derek Holt, CEO of Digital.ai (SD Times)

However, other experts argue that VSM remains critical for aligning technology with business outcomes. Heather Spring, a senior Product Marketing expert, highlights the enduring value of VSM:

“VSM is more than a process improvement tool. It’s a framework for achieving measurable business outcomes.” – Heather Spring, Broadcom ValueOps (Broadcom Academy)

These contrasting viewpoints contribute to a growing problem:

  • Senior leaders struggle to distinguish between practices and platforms.
  • Different tools claim to offer “the future of software measurement,” but many serve only a narrow scope.
  • Organizations risk making misinformed investments based on incomplete narratives.

My Perspective as of February 2025

This article represents my point of view, given the current state of these technologies in early 2025. Over the past few years, I’ve observed significant shifts in how software measurement platforms are marketed, adopted, and integrated into organizations.

Importantly, this article does not dive into specific frameworks such as DORA, SPACE, Flow Metrics, or team sentiment and developer experience qualitative metrics. Instead, it focuses on the platforms that integrate these various metrics and how their marketing narratives have confused senior leaders unfamiliar with this space.

The Growing Demand for Data-Driven Insights

A few years back, DORA metrics gained much attention, and early tools focused on providing dashboards to track them. Over time, these tools evolved and rebranded as Software Engineering Intelligence (SEI) solutions.

SEI platforms have gained significant traction recently as organizations recognize the importance of developer efficiency, software delivery performance, and engineering operational health. These platforms provide granular insights into pull request cycle times, deployment frequencies, mean time to recovery (MTTR), and engineering throughput, giving leaders a clear picture of how efficiently software is being built and delivered.

At their core, SEI platforms are designed to improve engineering operations, helping teams identify delivery bottlenecks, optimize workflow efficiency, and measure team health. For organizations still struggling to improve operational efficiency within delivery, SEI platforms make perfect sense and provide immediate feedback on a team’s ability to consistently ship high-quality software.

However, SEI is not the same as Value Stream Management.

Understanding the Misconceptions Around SEI and VSM

The key difference between these platforms lies in their scope. While SEI focuses on engineering processes, VSM platforms aim to provide end-to-end visibility across the entire software development lifecycle—from ideation and discovery to delivery and operation. The challenge is that many platform vendors position SEI as interchangeable with VSM, when in reality, the two serve different yet complementary purposes.

This overlap often leads to misaligned expectations for companies adopting these tools. Organizations seeking deep engineering intelligence might mistakenly invest in a VSM platform, which tends to be more expensive because it gathers and presents data from all stages of the digital product lifecycle and because most VSM vendors target larger enterprises; yet it may lack detailed data from the engineering delivery stage, or the organization may not be ready to handle that scope. Conversely, companies looking for full end-to-end visibility might purchase an SEI platform that covers only software delivery, leaving them unable to improve efficiency in the other value stream stages.

I recommend using metrics that cover the entire Value Stream, but this might not be cost-effective for many companies. To avoid choosing the wrong platform, technology leaders should start by identifying their organization’s bottlenecks; a toy sketch of this decision logic follows the list below:

  • An SEI platform is likely the best investment if engineering teams struggle with delivery performance.
  • If the challenge extends beyond engineering into product management, discovery, and business alignment, a VSM platform becomes the stronger choice.
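As a rough illustration of that selection logic, here is a hypothetical sketch. The stage names and the rule itself are my own simplification for this article, not a formal assessment model.

    def recommend_platform(bottlenecks: set[str]) -> str:
        """Toy heuristic for the SEI-vs-VSM starting point.

        `bottlenecks` holds the value stream stages where the
        organization struggles; stage names are illustrative.
        """
        delivery_stages = {"build", "review", "test", "deploy"}
        upstream_stages = {"ideation", "discovery", "portfolio", "business_alignment"}

        if bottlenecks & upstream_stages:
            # Pain extends beyond engineering: end-to-end visibility pays off.
            return "VSM platform"
        if bottlenecks & delivery_stages:
            # Pain is concentrated in delivery: start narrow and deep.
            return "SEI platform"
        return "Map the value stream first; the bottleneck isn't clear yet"

    print(recommend_platform({"review", "deploy"}))     # -> SEI platform
    print(recommend_platform({"discovery", "deploy"}))  # -> VSM platform

Note the ordering: if the pain crosses into upstream stages at all, the sketch prefers the broader platform, mirroring the recommendation above.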

VSM Only Works If Other Departments Are Engaged

One of the biggest mistakes companies make when adopting VSM platforms is assuming that technology alone can drive the initiative. The reality is that unless other departments, such as marketing, sales, or business leadership, see the value in tracking their processes, a VSM approach will not deliver full business impact.

VSM investment becomes effective when organizations use it to:  

  • Track and improve discovery workflows, such as how product teams validate ideas before passing them to engineering.  
  • Capture how the marketing and sales teams collaborate to support the discovery phase of the value stream.
  • Provide visibility into how cross-functional collaboration impacts the creation and delivery of digital products.

If your leadership and other departments are already discussing process visibility and workflow optimization, a VSM solution can drive enterprise-wide improvements in parallel with software delivery improvements. However, if the initiative is driven primarily by engineering, SEI is the better starting point, allowing teams to optimize delivery before expanding to broader value stream visibility.

My Journey Evaluating These Platforms in 2022

Between 2022 and 2023, before the term Software Engineering Intelligence (SEI) became widely known, I carried out a detailed evaluation of platforms for my organization, including:

  • Digital.ai
  • ServiceNow
  • Planview (then Tasktop Viz)
  • Jellyfish
  • Broadcom
  • LinearB
  • Plutora
  • Pluralsight Flow
  • Sleuth

Most of these platforms have improved since I first evaluated them. At the time, to the best of my knowledge, tools like LinearB, Sleuth, and Jellyfish had not yet embraced the term SEI. They were engineering analytics tools focused on measuring software delivery performance. Meanwhile, Tasktop, Digital.ai, Broadcom, and ServiceNow identified their offerings as Value Stream Management (VSM) platforms.

Given our organization’s transformation stage, I narrowed the choices down to two platforms:

  1. Tasktop Viz (now Planview Viz) – A strong VSM platform offering broad value stream insights but lacking delivery-stage granularity.
  2. LinearB – A strong delivery-focused platform providing deep engineering analytics but lacking visibility into the ideation-to-delivery flow.

At the time, I saw clear trade-offs:

  • VSM platforms were valuable for tracking end-to-end flow but lacked deep delivery insights.
  • Delivery-focused platforms had engineering visibility but provided no insight into the ideation, operations, and support stages of the Value Stream.

In 2024, my preferred VSM platform acquired an SEI platform, confirming my earlier belief that larger VSM platforms without detailed delivery integration would eventually seek to expand their capabilities through acquisitions.

The Emergence of DX Core 4

Adding to the ever-changing landscape and the array of choices is a significant development: DX Core 4, a framework designed to unify existing developer productivity models. DX Core 4 brings together principles from DORA, SPACE, and DevEx, focusing on four key dimensions:

  • Speed – How quickly software moves through the system.
  • Effectiveness – The ability to deliver software that meets requirements.
  • Quality – Ensuring software meets standards while minimizing defects.
  • Impact – Measuring the business and customer impact of software delivery.

The intent behind DX Core 4 is to balance these metrics, preventing over-optimization in one area at the cost of another. GetDX’s announcement provides more details.
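DX Core 4 does not prescribe an implementation, but as a hypothetical illustration of that balancing idea, the sketch below flags any dimension that regresses between two measurement periods, a hint that another dimension may have been over-optimized. The 0-100 scoring scale and the threshold are assumptions made for this example.

    from dataclasses import dataclass

    @dataclass
    class Core4Snapshot:
        """Hypothetical normalized scores (0-100) for one period.

        DX Core 4 defines the four dimensions; the scale and the
        imbalance check below are illustrative assumptions.
        """
        speed: float
        effectiveness: float
        quality: float
        impact: float

    def imbalance_warnings(prev: Core4Snapshot, curr: Core4Snapshot,
                           drop_threshold: float = 5.0) -> list[str]:
        """Flag dimensions that regressed by more than the threshold."""
        warnings = []
        for dim in ("speed", "effectiveness", "quality", "impact"):
            delta = getattr(curr, dim) - getattr(prev, dim)
            if delta < -drop_threshold:
                warnings.append(f"{dim} dropped {-delta:.0f} points; "
                                "check whether another dimension was over-optimized")
        return warnings

    q1 = Core4Snapshot(speed=62, effectiveness=70, quality=75, impact=58)
    q2 = Core4Snapshot(speed=78, effectiveness=71, quality=66, impact=59)
    print(imbalance_warnings(q1, q2))  # speed jumped, but quality regressed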

Breaking Down Marketing Myths

Myth 1: AI is Exclusive to SEI

The SEI Claim: SEI platforms are the future because they use AI for deep analytics and decision support, while VSM platforms do not.

The Reality: This is completely false. Leading VSM platforms, including Planview Viz and ServiceNow, have invested heavily in AI and generative AI-driven insights for deep data analysis, decision-making, and flow optimization.

SEI vs. VSM AI Capabilities

AI is not an SEI-exclusive feature. Both SEI and VSM platforms integrate AI-driven analytics: SEI for engineering delivery metrics, VSM for end-to-end business value flow across the organization.

Additionally, Planview’s Viz (copilot), Broadcom’s ValueOps (The Future of Value Stream Management Will Be Powered by AI), and others have integrated AI-driven workflow automation, tool integration, and predictive analytics, further demonstrating that AI capabilities are not exclusive to SEI.

Myth 2: SEI Covers the Full Value Stream

The SEI Claim: SEI platforms provide comprehensive visibility into software engineering performance.

The Reality: SEI platforms focus on the delivery stage of digital product development. They analyze developer productivity, team and operational efficiency, and software engineering health, but they do not track the full lifecycle of a digital product from ideation to operation. The SEI platform might be just what your organization needs at its current stage or context.

Comparing Scope: SEI vs. VSM (comparison table in the original post)

The Main Takeaway

The key takeaway in the SEI vs. VSM discussion isn’t which toolset is gaining momentum; it’s about scope. Marketing hype and expert opinions can inform your decision, but don’t let them mislead you. Understand what each label actually covers: SEI focuses on software delivery performance metrics within the build and deployment pipeline, while VSM spans the entire digital product lifecycle, from discovery and delivery through operations and support.

SEI tools offer insights limited to how efficiently code moves through development and deployment; VSM goes further by asking: Are we delivering the right value efficiently across the entire product journey? When evaluating a VSM platform, consider both the quality of its development and deployment metrics and the broader data it uses to provide visibility across the value stream and its organizational impact.

Rather than viewing SEI as a competitor to VSM, see it as a specialized tool for the delivery stage, while VSM ensures visibility, optimization, and continuous improvement across the entire value stream.

Choose the Right Metrics for Your Needs

Instead of asking, “Which platform is winning?” the better question is:

Which platform works best for your needs and provides the best visibility, whether for a specific part of your value stream or the entire one?

Organizational and engineering leaders should focus on what matters: selecting the right tools to address the right problems. The question isn’t just SEI vs. VSM. It’s about clearly understanding:

  • Where your organization’s bottlenecks exist
  • Who is driving the initiative
  • What level of visibility will drive the most impact

A few years ago, I started working with modern metrics like DORA. Since then, I’ve become a board member of the Value Stream Management Consortium, a champion for VSM implementation at my current organization, and an advocate for aligning software delivery metrics with business outcomes. This has given me the chance to explore the growing world of value stream management (VSM) and software engineering intelligence (SEI) platforms. Interestingly, a former colleague of mine is now a co-founder of one of these emerging platforms.

Vendors often use persuasive marketing to grab your attention and budget, but it’s important not to get caught up in the hype. Focus on your organization’s specific needs and the scope of the solutions offered. VSM and SEI platforms can differ greatly in features and cost, so finding the right fit means balancing your budget with your business priorities. Take the time to assess what truly matches your goals before making a decision.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: [email protected]

Filed Under: Agile, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management
