Rethink Your Understanding

Transforming Software Delivery



When the System Fits, the Product Operating Model Works

November 27, 2025 by philc

9 min read

In every conversation about product delivery, team structures, and operating models, one pattern always stands out: there is no single correct structure for a modern software organization.

Leaders make decisions based on their architecture, constraints, history, and the goals they want to achieve. That is why we see so much variation across companies. Some organizations thrive with smaller, long-lived, self-managed cross-functional teams aligned to clear domains. Others depend on larger engineering manager-led groups, shared capability teams, or more centralized arrangements. These differences are not failures. They are the result of leaders shaping systems around their specific context.

My own experience has shown the strength of a particular combination: small, autonomous, cross-functional, long-lived product teams operating within a clear boundary, supported by Team Topologies thinking, Agile practices, DevOps and continuous delivery, Value Stream Management, and the Product Operating Model.

When these elements align with the architecture and constraints of the environment, they create clarity, flow, and accountability. When they do not, the same practices that thrive in one environment can struggle in another. The operating model only performs when the system beneath it supports it.

That is why I appreciated Thorsten Speil’s recent LinkedIn article on the Product Operating Model. He captured many of its strengths and also surfaced the areas where interpretation varies, including team size, organizational implications, discovery practices, and the broader operational impact of shifting to a product-oriented way of working. His post brought these nuances back into focus and highlighted how easily good ideas get misunderstood once they spread across different companies and contexts.

Two themes resurfaced during the discussion. They do not reflect issues with Thorsten’s article, but they are common points of confusion across the industry and worth exploring more deeply.

Misunderstanding 1: Marty Cagan is recommending larger teams

This belief usually comes from surface-level summaries rather than the substance of the work. In his book Transformed, Marty Cagan does not argue that big teams are inherently better. He argues against dividing teams into narrow technical slices that leave them unable to deliver value without coordinating across several other groups.

When a team owns only a small fragment of the flow, such as just the UI or database layer, its success depends on the progress of others. Ownership becomes diluted, and dependencies increase.

The real question is not whether a team is “small” or “large.” It is whether the team owns a complete slice of value: a domain, a subdomain, or a coherent value stream it can deliver with minimal coordination.

In the organizations I’ve worked with, when we refactored monolithic or tangled systems and clarified domain boundaries, teams often became smaller, not larger. Crucially, they also became whole and autonomous. What changed was their completeness, not just their headcount.

What really determines the right team design is context: the architecture, domain boundaries, cognitive load, subject-matter expertise requirements, and the way work and value flow across the system.

If a subdomain or product in a portfolio is large enough and demands sustained work, a dedicated team may make sense. If several small subdomains or products share architecture or customer value, a single team or squad covering them together can reduce overhead. Team size and structure should align with system boundaries and value streams, not arbitrary org chart conventions.

Misunderstanding 2: The Product Operating Model replaces DevOps

These two ideas are sometimes mentioned together, but they address different layers of the organization.

DevOps improves the path from code to production. It strengthens feedback loops, automation, stability, and the ability to release safely and frequently. The Product Operating Model influences how decisions are made, how work is funded, how discovery and delivery are structured, and how teams are aligned to outcomes. It governs how strategy flows into teams.

One is about delivery performance. The other is about organizational direction. They are not interchangeable, and in a healthy system, they support each other. DevOps allows teams to learn quickly and respond rapidly. The Product Operating Model ensures that this capability is being applied to the right opportunities.

When organizations confuse the two, they end up with teams that can ship quickly but have no clarity on why, or teams that are empowered in theory but constrained by an outdated delivery path.

Where Value Stream Management fits

One of the most overlooked parts of the conversation is the role of Value Stream Management. Many organizations adopt the Product Operating Model with the right intentions, but without visibility into how work actually flows today. Value Stream Management provides that visibility. It shows where work gets stuck, where dependencies cluster, where priorities conflict, and where delays originate. It is the mechanism that connects architecture, team boundaries, and the customer journey into a single picture.

Without this visibility, a product-aligned structure becomes guesswork. Leaders cannot see the real bottlenecks, and teams cannot understand why autonomy feels out of reach. Flow metrics reinforce this visibility by making delays, load, efficiency, and distribution measurable. When VSM, flow metrics, and POM reinforce each other, teams gain stability and clarity. Ownership becomes real rather than symbolic.
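To make this concrete, here is a minimal sketch of how flow time and flow efficiency can be derived from work-item timestamps. The field names and sample data are hypothetical, not taken from any specific VSM tool; real platforms track richer state transitions, but the underlying arithmetic is the same: efficiency is active time divided by total elapsed time.

```python
from datetime import datetime

def flow_metrics(items):
    """Compute average flow time (hours) and flow efficiency (%) for completed items.

    Each item records when it entered and left the workflow, plus the hours
    it was actively worked on (the rest of the elapsed time is wait time).
    """
    per_item = []
    for item in items:
        elapsed = (item["done"] - item["started"]).total_seconds() / 3600  # hours
        efficiency = item["active_hours"] / elapsed * 100 if elapsed else 0.0
        per_item.append((elapsed, efficiency))
    avg_flow = sum(e for e, _ in per_item) / len(per_item)
    avg_eff = sum(f for _, f in per_item) / len(per_item)
    return avg_flow, avg_eff

# Hypothetical sample: two items, both mostly waiting rather than being worked.
items = [
    {"started": datetime(2025, 1, 6), "done": datetime(2025, 1, 10), "active_hours": 16},
    {"started": datetime(2025, 1, 6), "done": datetime(2025, 1, 20), "active_hours": 24},
]
avg_flow, avg_eff = flow_metrics(items)
print(f"avg flow time: {avg_flow:.0f}h, avg flow efficiency: {avg_eff:.1f}%")
```

Even this toy calculation illustrates the point of the section: a single-digit flow efficiency number makes it visible that most elapsed time is queuing and dependency wait, not work.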

The Product Operating Model also changes how work is funded

Another important idea that often gets overlooked is the shift in funding. The Product Operating Model is not simply a structural or cultural change; it changes how work is supported economically.

Instead of funding projects on an annual cycle, organizations fund products and the teams responsible for them. Teams are long-lived rather than assembled and disbanded. Prioritization is continuous rather than fixed once a year.

Outcomes replace scope as the primary measure of progress, and domain expertise becomes a long-term asset. Stable teams and stable funding reinforce each other and create an environment where real ownership and long-term accountability can thrive.

Architecture enables team autonomy

It is common to talk about rapid delivery, continuous discovery, and empowered teams, but none of these is possible unless the architecture supports them.

If components are tightly coupled, if deployments require several approvals, or if core systems or data are shared among many teams, autonomy becomes difficult to implement regardless of intention. Organizational charts cannot compensate for technical constraints.

The most effective team topologies emerge from systems with clear domain boundaries, separation of concerns, modularity, and platform capabilities that reduce cognitive load. When architecture and team design reinforce each other, teams can own outcomes. When they conflict, coordination overhead grows, and autonomy becomes harder to achieve.

Architecture choices shape, but do not dictate, the model

I often advocate for distributed systems and microservices because they reduce dependency load and allow teams to operate with greater independence. But that does not mean these architectures are right for every organization. Modular monoliths, macroservices, domain-oriented monoliths, and hybrid models can all support effective product teams when their boundaries are clear and consistent.

What matters most is that the architecture supports meaningful ownership. I have seen monolithic systems with strong modular structure outperform poorly partitioned microservices because the boundaries were more deliberate.

The Product Operating Model does not require microservices. It requires coherent ownership aligned with the architectural reality.

A monolithic system can still operate effectively under a Product Operating Model when teams have clear ownership boundaries. The fundamental idea behind the Product Operating Model is organizing around outcomes and customer value rather than technical layers.

Teams need responsibility for a meaningful, end-to-end part of the product, not just a narrow slice of the stack. When a monolith is structured with deliberate domain separation and disciplined layers, teams can still take ownership of specific product areas or value streams and make decisions within those boundaries.

At the same time, monolithic systems often introduce more coordination requirements. Shared code paths, tightly coupled components, and synchronized releases can create friction and increase dependency load. These challenges do not prevent the Product Operating Model from working, but they require more intentional communication, clearer boundaries, and stronger agreements around how teams collaborate inside the monolith.

The architecture does not have to be perfect; it simply needs to support coherent ownership. The clearer the system’s internal structure, the easier it is for teams to operate end to end without excessive coordination.

This is why context matters. The Product Operating Model succeeds when the system enables teams to own outcomes, regardless of whether the underlying architecture is a monolith, a modular monolith, or a distributed set of services.

Why context matters

Organizations often begin by asking whether they should adopt the Product Operating Model. A better question is what their current system allows and where the real constraints are.

You can adopt a Product Operating Model in a monolithic architecture, and many companies do. What matters most is whether teams can own meaningful areas of the product, make decisions with limited friction, and deliver improvements without excessive dependencies. Some monoliths support this quite well, particularly when structured with clear domain boundaries. Others are so tightly coupled that autonomy is difficult until parts of the system are modernized.

The model itself is rarely the constraint. The system and its boundaries are. Most failed transformations happen not because the Product Operating Model is flawed, but because leaders apply it without understanding the environment that must support it.

The real work is creating the conditions for POM to succeed

Organizations that succeed with the Product Operating Model share several characteristics. Their architecture supports autonomy. Their value streams are visible. Flow metrics guide decisions. Team structures match real domain boundaries. DevOps practices are mature enough to support rapid learning and delivery. And product, design, and engineering operate together as one system.

In these environments, the Product Operating Model does not feel like a framework. It is the natural way the organization should operate. It aligns people, technology, and strategy into a coherent system and gives teams the conditions they need to take real ownership.

What Really Determines Whether POM Succeeds

Most debate about the Product Operating Model focuses on whether it is the right model. That is not the most helpful place to begin. The more important question is whether the system can support long-term product ownership and sustained team autonomy.

The Product Operating Model is not only a team structure. It is a commitment to funding products rather than projects, supporting teams for the lifespan of the product, building and retaining domain expertise, prioritizing work continuously instead of annually, and evaluating progress through outcomes rather than activity. When these elements are combined with modern architecture, visibility into flow, and strong DevOps practices, the Product Operating Model becomes a practical and natural way to operate. Teams can own their work end-to-end and connect what they build to real customer value.

When organizations attempt to adopt the model without making these underlying adjustments, POM struggles. Team boundaries feel artificial, ownership breaks down, and delivery becomes a ceremony rather than a learning experience.

The more productive question is not whether to adopt the Product Operating Model, but rather how to do so. The practical question is what needs to change in the architecture, the flow of work, the funding model, and the team design so that a product-oriented way of working can thrive in this environment.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References and Further Reading

This article draws on ideas and practices that have shaped modern product development, organizational design, and software delivery. For readers who want to explore the concepts more deeply, the following works provide useful context.

Thorsten Speil – “You need to move to the Product Operating Model! … Really?” (2025), https://www.linkedin.com/pulse/you-need-move-product-operating-model-really-whats-thorsten-speil-2mhcf/
The original post that inspired this article and sparked a thoughtful discussion on how organizations interpret and apply POM principles in different contexts.

Marty Cagan – Transformed (2024)
Clear articulation of the Product Operating Model and the organizational conditions needed to support empowered product teams.

Matthew Skelton and Manuel Pais – Team Topologies
Guidance on service-aligned team structures, interaction modes, cognitive load, and organizational boundaries that support flow.

Value Stream Management Consortium – Project to Product Reports (2023–2024)
Industry research on flow metrics, product funding, and how organizations connect technology investments to actual business outcomes.

Dr. Nicole Forsgren, Jez Humble, and Gene Kim – Accelerate
Evidence-based insights into DevOps, continuous delivery, feedback loops, and the capabilities of high-performing engineering organizations.

Steve Pereira and Andrew Davis – Flow Engineering
Practical mapping techniques for visualizing system constraints, dependencies, and opportunities to improve value flow.

Eric Evans – Domain-Driven Design
Architectural foundations for creating clear domain boundaries that support coherent ownership in product-aligned teams.

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

The Price of Alignment

October 21, 2025 by philc

How Well-Intentioned Integration Can Undermine Modern Architecture and Team Autonomy

11 min read

Having been part of numerous acquisitions and technical due diligence efforts throughout my career, I have observed familiar patterns in mergers. Leadership evaluates architecture, the technology stack, team skills, and organizational structure to assess fit and value. I have seen how integration decisions can reinforce or erode what made an acquisition valuable.

In one recent integration, however, the process risked undoing the very capabilities that had made the acquired organization successful. Its architecture and team design, once aligned for speed and autonomy, were reshaped in ways that constrained both.

A large, established technology company, which I will call LegacyTech, acquired a smaller, fast-moving company, AgileWorks. LegacyTech’s architecture is still largely monolithic, shaped by years of prioritizing control, predictability, and reliability. Its teams are large, its operating rhythm consistent, and its management approach straightforward: one engineering manager for about eight engineers, one product manager per team, and a typical structure applied across the organization. Part of this structure reflected LegacyTech’s leadership instincts, and the other part reflected the very real need to standardize roles, expectations, and career frameworks across the combined company.

AgileWorks had spent nearly a decade transforming from a monolithic system into a microservices-based organization. Its teams were small, long-lived, cross-functional, and aligned to clearly defined domains and subdomains. Each team owned its services end to end, including logic, data, deployments, and the flow of value. They operated with local decision-making, shipped independently, and continuously improved without waiting for outside coordination.

At AgileWorks, engineering managers focused on people and career development rather than directing day-to-day delivery. Delivery teams included dedicated Agile leaders, product partners, technical leads, user experience, QA, and the skills required to deliver independently.

By the time of the acquisition, AgileWorks had become the kind of organization many aspire to be: fast, autonomous, and adaptable.

The Integration Challenge

After the acquisition, LegacyTech required all teams, both legacy and newly acquired, to adopt the same structure, reporting model, and span-of-control expectations. AgileWorks’ small, domain-aligned teams were reshaped to match LegacyTech’s organization. What seemed rational and efficient on paper quickly began influencing how work flowed in practice. Structure does not simply describe how people work. It defines how they work.

When Structure Shapes the System

Conway’s Law teaches that organizations design systems that mirror their communication structures.

At AgileWorks, small teams designed, built, and deployed small services. At LegacyTech, large teams built large systems. Both structures made sense in their respective architectures.

When the two companies merged, the system began to change, not because the code changed, but because the structure changed. Teams that once released independently now had to coordinate across domains. Engineering managers balanced conflicting priorities across multiple queues. Flow slowed. The architecture itself risked drifting toward a more coupled and synchronized model.

This was not personal or political. It was mechanics.

Understanding the Architectural Misalignment

LegacyTech was not wrong. It was simply optimized for a different world.

Its monolithic architecture enabled centralized decision-making, broad responsibilities, and larger teams. In that environment, consistency and uniform structures work well.

AgileWorks’ architecture required something different. It operated within a distributed microservice architecture with bounded contexts. Each service independently owns its logic, data, and deployment lifecycle. Because the architecture was modular, the teams were modular as well. Small teams were not a stylistic preference. They were a structural capability.

Seen from LegacyTech’s perspective, AgileWorks’ structure looked unfamiliar: more teams, fewer people per team, independent decisions, separate flows. Without curiosity about architectural context, autonomy can look like fragmentation.

Why the “Too Many SKUs” Question Misses the Point

As integration continued, a recurring question arose. “Why does AgileWorks have so many SKUs?”

To LegacyTech, the number seemed excessive. A SKU is simply an internal identifier for a product, but in AgileWorks’ system, each SKU represented a bounded context, a domain or subdomain with its own architecture, team, and flow of value.

This is the natural outcome of microservices. As Martin Fowler and Dave Farley describe them, microservices are small, autonomous, independently deployable, and aligned to a single, well-defined domain boundary. That was exactly how AgileWorks structured its system. Each SKU marked a clean separation of concerns, not a proliferation of products.

Microservices reduce dependency drag by allowing teams to work in parallel, which DORA research consistently shows is a predictor of higher performance. What LegacyTech viewed as unnecessary complexity was in fact a sign of architectural maturity.

The Hidden Cost of Consistency

Consistency creates clarity and predictability. It is appealing, especially in large organizations. But when consistency is applied without understanding architectural intent, it can silently erode the strengths the acquisition sought to preserve.

Reassigning domain-aligned teams into broader groups collapses boundaries that AgileWorks had intentionally kept separate. From the outside, the structure appears aligned. On the inside, queues form, decisions slow, and delivery suffers.

What appeared efficient became regression.

Leadership Context and Legacy Mindsets

LegacyTech’s leaders emphasized consistency across the combined organization. One executive summarized it plainly: “We cannot redesign all of our teams to match the company we acquired, so we will redesign theirs to match ours.” From their perspective, this was a practical decision.

With far more teams than AgileWorks, standardizing the smaller footprint seemed simpler and more efficient. AgileWorks’ leadership understood this dynamic well; it was the same approach they had used when integrating engineering teams during their own previous acquisitions.

LegacyTech’s desire for consistency created immediate pressure to reorganize AgileWorks’ teams. To meet these expectations while minimizing disruption, the AgileWorks department head adopted a phased hybrid approach.

Instead of dismantling domain boundaries outright, he grouped related subdomain teams under individual engineering managers until each manager had an average of eight software engineers in their hierarchy. This met LegacyTech’s span-of-control rules while protecting delivery continuity for committed roadmap work. Leadership and HR at LegacyTech formally approved the plan.

Although the plan satisfied the stated requirements, it did not align with how LegacyTech leaders mentally modeled team structure. They were accustomed to a one-engineering-manager-to-one-team pattern tied to a single codebase. Seeing multiple small subdomain teams grouped under a single engineering manager did not fit that worldview.

Despite backchannel conversations, no one approached the AgileWorks leader directly to understand why the hybrid structure existed or what it was designed to protect. Over time, this misunderstanding worked against him.

The original team design misunderstanding also set the stage for deeper structural changes. To eliminate ambiguity and fully align the organizations, AgileWorks was required to adopt LegacyTech’s management model. This meant removing the Agile Leader role from each team and shifting delivery responsibility directly to engineering managers.

The shift was more than procedural. It fundamentally redefined the role of the engineering manager at AgileWorks.

Engineering managers, who had previously focused on people development and coaching, were now accountable for day-to-day delivery, performance, coordination, and practice consistency. Their span of control increased from five to eight, requiring them to support two or sometimes three small subdomain teams, since most AgileWorks teams averaged three developers.

What had once been a role centered on enabling people quickly became one centered on directing delivery. The cognitive load increased, role boundaries blurred, and the structure that once allowed AgileWorks to move rapidly and independently became increasingly challenging to maintain.

What had once enabled flow and autonomy at the subdomain-scoped team level now introduced friction and confusion. These role changes did not remain at the organizational layer; they risked influencing how the architecture behaved.

This tension over roles and structure surfaced again in how leaders interpreted AgileWorks’ team sizes and scopes.

Team Design, Domain Thinking, and the Case for Larger Scopes

Misunderstanding also surfaced around team size and scope. AgileWorks had several small teams working within the same domain, which some LegacyTech leaders interpreted as fragmentation. The issue was not fragmentation. It was a missed opportunity to ask why the structure existed in the first place.

In Transformed, Marty Cagan describes a common pitfall in product transformations. Organizations create too many narrow teams that each own a thin slice of the product. These slices are too small to deliver real outcomes independently. Handoffs increase, dependencies grow, and accountability becomes unclear.

Cagan’s recommendation is not necessarily to build bigger teams. It is to give small teams a larger scope: increase what a team owns, not the number of people on it.

AgileWorks followed this principle. Drawing from Domain-Driven Design, Separation of Concerns, microservices, and distributed systems patterns:

  • Domains aligned to product portfolios
  • Subdomains aligned to individual products or major capabilities
  • Each team owned a full subdomain end-to-end

This structure gave every team deep domain expertise, architectural control, independent deployment capability, and clear ownership of outcomes. Small teams did not mean fragmented teams. They owned coherent, customer-facing capabilities aligned with product portfolios.

LegacyTech’s model relied on broader functional groupings within a monolithic system. Engineers were often reassigned to different teams based on capacity needs. That model works in monoliths but does not map cleanly to distributed systems where autonomy and boundary clarity matter.

Curiosity would have bridged this gap. A simple question about how AgileWorks’ teams aligned to product portfolios and subdomains might have made the structure clear and its purpose obvious.

When Experience Replaces Curiosity

A moment shared with me later captured the tension clearly. A long-tenured manager walked a LegacyTech senior leader through AgileWorks’ architecture and team structure. The leader responded, “I have been doing this for thirty years, and my playbook works. I do not understand how your organization works, nor do I care to at this point.”

This approach is a familiar leadership pattern. Playbooks shaped by years of success do not always map to new architectural contexts. One size does not fit all. Playbooks can be adjusted to match context and practices. Experience is valuable, but experience without curiosity becomes limiting. And in distributed systems, where autonomy and domain clarity matter, it can quietly become destructive.

When Naming Becomes a Surrogate for Understanding

Another unexpected friction point involved team names. Years earlier, AgileWorks allowed teams to name themselves. They chose names like Red, Blue, and Green. These names were cultural, not architectural.

Inside the organization, each team was clearly mapped to its domain, products, and value stream. Ownership was unambiguous.

Yet some LegacyTech leaders found the names confusing, and a few joked about them. They expected teams to be self-describing, named after the products they owned. Ironically, the argument contradicted itself: LegacyTech also had several teams named after animals, inside terms, and cultural references. Labels alone did not reflect product alignment on either side.

Asking a simple question, “How do these team names map to your products and domains?”, would have resolved everything.

The Diligence Gap

AgileWorks’ success came not only from its code but from how its teams worked: decoupled, autonomous, and aligned with their architecture.

Ignoring that alignment risks dismantling the system that created the value in the first place.

You can acquire the product and organization, but if you do not understand the system that built it, changing one without respecting the other often produces unintended consequences.

When Capacity Pressure Resurrects Old Patterns

Another difference emerged in how each organization responded to capacity pressure. AgileWorks designed its teams to be long-lived. Engineers rarely moved between teams, which allowed domain expertise, trust, and ownership to develop over time.

LegacyTech worked differently. At the end of major delivery cycles, leaders reassigned engineers wherever demand was highest. Teams functioned as resource pools, flexible and frequently reshuffled. Engineers did not always have a choice, and these moves were often framed as career opportunities.

When demand exceeds supply, leaders fall back on the operating model they trust. In larger, monolithic organizations, pooling and trading engineers across teams can work because the teams themselves are large and the domains broad. But in a distributed architecture with small, domain-aligned subdomain teams of three developers, this same practice can have a significant negative impact. Removing a single engineer destabilizes the team’s knowledge base, disrupts flow, and undermines the deep domain context those teams rely on.

When people become fungible, domains may become fungible as well. When domains become fungible, ownership becomes shallow. When ownership becomes shallow, flow slows, defects rise, and quality declines.

LegacyTech was not acting in bad faith. They were relying on a model that had worked in their environment. But AgileWorks required long-lived teams because its architecture depended on them. When capacity was tight, AgileWorks moved teams to the work, not individual team members.

Why Context and Choice Determine Team Design

Modern leadership provides many frameworks to draw from: Cagan, Kersten, Team Topologies, Agile, Lean, DevOps, Value Stream Management, Product Operating Models, Scrum, XP, and Kanban. Each offers value, but none is universally correct. Their effectiveness depends entirely on the context in which they are applied.

Some leaders prefer fewer teams with broader scopes. Others prefer many small, domain-aligned teams. Both approaches can succeed. Both can fail. The difference is not the model itself but the architecture, constraints, and business environment it must support.

So the question is never which team model is better. The real question is which structure fits the architecture, flow constraints, and organizational realities of this moment in time.

And even more importantly, do leaders understand why a structure existed in the first place? Most team designs are not arbitrary. They reflect hard-earned lessons about architectural boundaries, flow of work, domain ownership, operational needs, and past failures.

Curiosity is what makes these choices effective. It separates meaningful alignment from surface-level consistency. Without curiosity, even well-meaning integration decisions can erase the very patterns that made a system successful.

Closing Reflection

Through curiosity, I came to understand why LegacyTech made its choices. They were not dismissing AgileWorks’ model. They were responding to their own context, constraints, and operating history. Their decisions made sense inside the environment they knew.

This is the point.

This is not a story about who was right or wrong. It is a story about what happens when architecture and structure drift apart. When a monolithic organization acquires a microservices-driven one, success depends not only on integrating people and tools, but also on integrating understanding.

When you acquire a product, you also acquire the organizational DNA that built it. Structure, practices, team design, flow, and architecture evolve together. Change one without respecting the other, and the system will reshape itself in ways you may not expect.

Alignment is powerful when it is guided by curiosity. Curiosity turns alignment from imposition into learning. And understanding becomes the bridge between two different worlds, trying to operate as one.

Sustainable integration is not about enforcing a single model, but about recognizing the strengths each system brings and understanding how to align them without erasing what makes them effective.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


Influences and Further Reading

Established ideas from modern software architecture, team design, and flow-based organizational practices inform this article. The concepts discussed draw on Domain-Driven Design, microservices and distributed systems principles, Team Topologies, the Flow Framework, DevOps and Lean thinking, and contemporary product operating models.

Notable contributors to these bodies of work include Eric Evans, Martin Fowler, Dave Farley, Manuel Pais, Matthew Skelton, Marty Cagan, and Mik Kersten, as well as research from the DORA community (Accelerate). Their work has shaped much of today’s understanding of how architecture, team structure, and organizational context interact to influence delivery performance and long-term success.

Filed Under: Agile, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

Beyond the Beyond Delivery: AI Across the Value Stream

October 11, 2025 by philc

A follow-up article and reflection on how AI amplifies the systems it enters, and why clarity in measurement and language defines its true impact.

4 min read

After reading Laura Tacho’s latest article, “What the 2025 DORA Report Means for Your AI Strategy,” published today by DX, I found myself nodding along from start to finish. Her analysis reinforces what many of us have been saying for the past year: AI doesn’t automatically improve your system; it amplifies whatever already exists within it.

If your system is healthy, AI accelerates learning, delivery, and improvement. If it’s fragmented or dysfunctional, AI will only expose that reality faster.

In my earlier and related article, “Beyond Delivery: Realizing AI’s Potential Across the Value Stream,” I explored this same theme, referencing Laura’s previous work and the DX Core Four research to show how AI’s true promise emerges when applied across the entire value stream, not just within delivery. Her new reflections build on that conversation beautifully, grounding it in DORA’s 2025 findings and placing even greater emphasis on what truly determines AI success: measurement, monitoring, and system health.

AI’s True Leverage Is in the System

What stands out in both discussions is that AI amplifies the system it enters.

Healthy systems, with strong engineering practices, small-batch work, solid source control, and active observability, see acceleration. Weak systems, where friction and inconsistency already exist, see those problems amplified.

That’s why measurement and feedback are the new leadership disciplines.

Organizations treating AI as a system-level investment, rather than a tool for individual productivity, are seeing the greatest impact. They aren’t asking “how many developers are using Copilot?” but instead “how is AI helping our teams improve outcomes across the value stream?”

DORA’s latest research validates that shift, focusing less on adoption rates and more on outcomes. It echoes a point Laura made and I emphasized in my own writing: AI’s advantage is proportional to the strength of your engineering system.

Why Clarity Still Matters

While I agree with nearly everything in Laura’s article, one nuance deserves attention, not as a critique, but as context.

DORA, DX Core 4, LinearB, and other Software Engineering Intelligence (SEI) platforms are not Value Stream Management (VSM) platforms. They measure one segment of the delivery lifecycle: create and release. True VSM, however, spans the entire lifecycle, from idea through delivery and operation.

This distinction matters because where AI is applied should match where your bottlenecks exist.

If your constraint is upstream, in ideation or backlog management, and you only apply AI within development, you’re optimizing a stage that isn’t the problem.

Think of your value stream as four connected tanks of water: ideation, creation, release, and operation.

If the first tank (ideation) is blocked, making the water move faster in the second (creation) doesn’t improve throughput. You’re just circulating water in your own tank while everything above remains stuck.

That’s why AI should be applied where it can improve the overall flow, across the whole system, not just a single stage.

It’s also where clarity of language matters. Some Software Engineering Intelligence (SEI) platforms, including Laura’s organization, integrate DORA metrics within broader insights and occasionally describe their approach as VSM. From a marketing standpoint, that’s understandable; SEI platforms compete with full-scale VSM platforms, such as Planview Viz, which measure the entire value stream. However, it’s worth remembering that DORA and most SEI metrics represent one vital stage, not the entire system.

On Vendors, Neutrality, and Experience

I have deep respect for Laura and her organization’s work advancing how we measure and improve developer experience. Over the last four years, I’ve also established professional relationships with several of these platform providers, offering feedback and leadership perspectives to their teams as they evolve their products and strategies.

I share this because my perspective is grounded in firsthand experience, research, and conversations across the industry, not because of any endorsement. I’m not paid to promote any vendor. Those who know me are aware that I have my preferences, currently Planview Viz for Value Stream Management, as well as LinearB and the DX Core 4 for Software Engineering Intelligence and developer-experience insights.

Each offers unique value, but I’ve yet to see a single platform deliver a truly complete view across all stages, combining full system-level metrics and team sentiment data. Until that happens, I’ll continue to advocate for clarity of terms and how these solutions market themselves, and measurements that accurately reflect reality.

And to be fair, I haven’t kept up with every vendor’s latest releases, so I encourage anyone exploring these tools to do their own research and choose what best fits their organization’s context and maturity.

Closing Thought

Laura’s article is spot-on in identifying what really drives AI impact: monitoring, measuring, and managing the system it touches.

That’s the same theme at the heart of Beyond Delivery: that AI’s potential isn’t realized through automation alone, but through its ability to illuminate flow, reveal friction, and help teams improve faster than before.

When we describe our systems accurately, we focus on what truly matters, and that’s when AI stops being a tool for speed and becomes an accelerant for value across the entire system.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Tacho, Laura. “What the 2025 DORA Report Means for Your AI Strategy.” DX Newsletter, October 8, 2025.
    Available at: https://newsletter.getdx.com/p/2025-dora-report-means-for-your-ai-strategy
  • Clark, Phil. “Beyond Delivery: Realizing AI’s Potential Across the Value Stream.” Rethink Your Understanding, September 2025.
    Available at: https://rethinkyourunderstanding.com/2025/09/beyond-delivery-realizing-ais-potential-across-the-value-stream/
  • DORA Research Team. “2025 State of AI-Assisted Software Development (DORA Report).” Google Cloud / DORA, September 2025.
    Available at: https://cloud.google.com/devops/state-of-devops

Filed Under: Agile, AI, DevOps, Metrics, Product Delivery, Software Engineering, Value Stream Management

What Happens When We Eliminate the Agile Leader?

October 9, 2025 by philc

The hidden cost of removing the role that protects flow, team health, and continuous improvement

7 min read

Every few months, the “Agile is Dead” conversation surfaces in leadership meetings, LinkedIn threads, or hallway debates. Recently, I’ve been reflecting on it from two angles:

  • First, I’ve seen organizations under new leadership take very different paths; some thrive with dedicated Scrum Masters or Agile Delivery Manager roles, while others remove them and shift responsibilities to engineering managers and teams.
  • Second, I came across a LinkedIn post describing companies letting go of Scrum Masters and Agile coaches, not for financial reasons, but as a conscious redesign of how they deliver software.

Both perspectives reveal a deeper confusion. Many believe Agile itself is outdated; others assume that if Scrum changes, the role associated with it, the Scrum Master, should disappear too.

But are teams really outgrowing Agile?

Or are we simply misunderstanding the purpose of the Agile leader?

Agile Isn’t Dead, But It’s Often Misapplied

When people say “Agile is dead,” they’re rarely attacking its principles. Delivering in small batches, learning fast, and adapting based on feedback are still how modern teams succeed. What’s fading is the packaged version of Agile, the one sold through mass certifications, rigid frameworks, and transformation playbooks.

Much of the backlash comes from poor implementations. Consulting firms rolled out what they called “textbook Scrum,” blending practices from other frameworks, such as story points and user stories from Extreme Programming (XP), and applying them everywhere. Teams focused on sprints, standups, and rituals instead of learning and improvement.

Scrum was never meant to be rigid; it’s a lightweight framework for managing complexity. When treated as a checklist, it becomes “cargo-cult” Agile, copying rituals without purpose. When that fails, organizations often blame the framework, rather than the implementation.

That misunderstanding extends to the Scrum Master role itself. Many assume that dropping Scrum means dropping the Scrum Master. But the need for someone to coach, facilitate, and sustain continuous improvement doesn’t vanish when frameworks evolve.

Do We Still Need an Agile Leader?

Whether they continue with Scrum or transition to Kanban or hybrid flow models, many organizations are eliminating Agile leadership roles. Responsibilities once owned by a Scrum Master or Agile Coach are now:

  • absorbed by Engineering Managers,
  • distributed across team members, or
  • elevated to Program Management.

On paper, this looks efficient. In reality, it often creates a gap because no one is explicitly accountable for maintaining flow, team health, and continuous improvement.

The Role’s Evolution and Its Reputation

Over time, the Scrum Master evolved into roles such as Agile Coach, Agile Leader, or Agile Delivery Manager (ADM), leaders who:

  • coached flow and sustainability,
  • resolved cross-team dependencies,
  • championed experimentation and team health, and
  • used flow metrics to surface bottlenecks and team delivery performance, and
  • connected delivery initiatives and epics to business outcomes.

These were not meeting schedulers. They were system stewards, enabling teams to deliver effectively and sustainably.

Unfortunately, the role’s reputation suffered as the industry scaled too fast. The explosion of two-day certification courses created an influx of “certified experts” with little experience. Many were placed in impossible positions, expected to transform organizations without the authority or mentorship to succeed. Some individuals grew into exceptional Agile leaders, while others struggled.

The uneven quality left leaders skeptical. That’s not a failure of the role itself, but a byproduct of how quickly Agile became commercialized.

When the Role Disappears (or Gets Folded Into Management)

In some organizations, the Agile leadership role has been absorbed by Engineering Managers. On paper, this simplifies accountability and structure. In practice, it creates new trade-offs:

  • Overload: Engineering Managers juggle hiring, technical design and strategy, people development, and implementation oversight. Adding Agile facilitation stretches them thin.
  • Loss of neutrality: It’s hard to be both coach and evaluator. Psychological safety and open reflection suffer.
  • Reduced focus: Good Agile leaders specialize in flow, metrics, and process improvement. Those responsibilities often fade when combined with other priorities.

I’m watching this shift happen in real time. In one organization that removed its Agile leaders, Engineering Managers now coordinate ceremonies and metrics while trying to sustain alignment. The administrative tasks are covered, but continuous improvement and team sentiment have slipped out of focus. There’s only so much one role can absorb before something important gives way.

These managers, once deeply technical and people-oriented, now find themselves stretched across too many competing responsibilities. It’s still early, but the question isn’t whether meetings happen; it’s whether performance, flow, and engagement can be sustained without a separate role dedicated to nurturing them.

Redistribution to Program Management

Some of the higher-level coaching and metrics work has moved into Program Management. Many program managers at this organization hold Scrum Master certifications and act as advisors to Engineering Managers, while maintaining flow metrics and ensuring value stream visibility.

It’s a reasonable bridge, but scale limits its impact. A single program manager may support six to eight teams, focusing only on the most critical issues. The broader discipline of continuous improvement, including reviewing flow data, addressing bottlenecks, or mapping value streams, risks fading when no one on the team is closely involved.

Distributing or Rotating Responsibilities

Some teams attempt to share Agile responsibilities: rotating facilitators, distributing meeting ownership, or collectively tracking metrics. It’s a well-intentioned model that works for mature, stable teams, but it has limits.

  • Frequent rotation breaks continuity and learning.
  • Coaching depth is lost when no one develops mastery.
  • Under delivery pressure, improvement tasks fall to the bottom of the list.

Distributed ownership can work in bursts, but it rarely sustains long-term improvement. Someone still needs to own the system, even if the title is gone.

Leadership Mindsets Define Success

Whether an organization retains or removes Agile leaders often comes down to mindset.

Execution-First Leadership (Command & Control):

  • Believes delivery can be managed through structure and accountability.
  • Sees facilitation and coaching as overhead.
  • Accepts distributed ownership as “good enough.”

Systems-Enabling Leadership (Servant / Flow):

  • Believes facilitation and improvement require focus and skill.
  • Invests in Agile leaders to strengthen flow and collaboration.
  • Sees distributed responsibility as a step, not a destination.

Neither model is inherently wrong; they reflect different views on how improvement happens. But experience shows a clear trade-off: when continuous improvement is one of many responsibilities, it often becomes no one’s priority. A dedicated Agile leader keeps that focus alive; an overloaded manager rarely can for long. The key is designing a system where improvement has space to breathe, not just another task on an already full plate.

The Myth of the Unicorn

When organizations fold Agile leadership into engineering management or product management, they often create “unicorns”: individuals expected to possess deep core skills while also serving as effective leaders, delivery owners, and process coaches simultaneously.

Those who can do this well are rare, and even they struggle with constant task-switching across competing priorities. When these high performers leave, the organization loses more than a person; it loses context, flow awareness, and continuity. Replacing them is difficult; few candidates in the market combine such a broad mix of technical, leadership, and coaching skills.

Scrum, Kanban, and What Doesn’t Change

Practices evolve. Scrum remains widely used, but many teams operate in Kanban or hybrid systems. The shift to continuous delivery doesn’t eliminate the need for Agile leadership; if anything, it heightens it.

As work becomes more distributed and complex, teams still need a steward of flow and feedback. Frameworks differ; however, the function that enables collaboration and systemic improvement remains the same.

The Path Forward: Protect the Capability, Not the Title

Instead of asking, “Should we bring Scrum Masters back?” leaders should be asking a more fundamental question:

Who in our organization is responsible for enabling collaboration, removing impediments, promoting improvement, maintaining team health, and driving systemic learning?

If the answer is “no one,” it doesn’t matter what you call the role; you have a gap.

If the answer is “partially someone (rotated or shared),” acknowledge the compromise, the diffusion of ownership, and a loss of focus, and revisit it as the organization matures.

Agile will continue to exist with or without a dedicated Scrum Master or Agile Leader. Frameworks evolve, but the principles, small batches, fast feedback, and empowered teams remain the same. Having a dedicated role strengthens a team’s ability to apply those principles consistently. Without one, Agile doesn’t vanish, but performance and improvement discipline often do.

The point isn’t about losing Agile practices; it’s about the risk of losing stewardship. Without it, the habits that once drove learning and improvement fade, and teams can inevitably slide back toward the rigid, hierarchical models Agile set out to change.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


Related Reading

If this topic resonated with you, you may find these articles valuable as complementary perspectives:

  • From Scrum Master to Agile Delivery Manager: Evolution in the Age of Flow
    Explores how the Agile leadership role evolved beyond facilitation to become a strategic driver of flow and measurable outcomes.
  • Why Cutting Agile Leadership Hurts Teams More Than It Saves
    Examines the long-term cultural and performance costs organizations face when eliminating roles dedicated to continuous improvement.
  • Mindsets That Shape Software Delivery Team Structures
    Highlights how leadership philosophies, command-and-control versus systems-enabling, determine whether teams thrive or stall.

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

From Two Pizzas to One: How AI Reshapes Dev Teams

October 2, 2025 by philc

Exploring how AI could reshape software teams, smaller pods, stronger guardrails, and the balance between autonomy and oversight.

7 min read

For more than two decades, Jeff Bezos’s “two-pizza team” rule has been shorthand for small, effective software teams: a group should be small enough that two pizzas can feed them, typically about 5–10 people. The principle is simple: fewer people means fewer communication lines, less overhead, and faster progress. The math illustrates this well: 10 people create 45 communication channels, while four people create just six. Smaller groups spend less time coordinating, which often leads to faster outcomes.
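The channel counts quoted above come from the standard pairwise formula, n(n-1)/2. A minimal sketch to verify the arithmetic (function name is mine, for illustration):

```python
def communication_channels(team_size: int) -> int:
    """Pairwise communication lines in a team of n people: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

# The figures cited in the text:
print(communication_channels(10))  # 45 channels for a ten-person team
print(communication_channels(4))   # 6 channels for a four-person team
```

The quadratic growth is the point: doubling a team far more than doubles its coordination surface.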

This article was sparked by a comment at this year’s Enterprise Technology Leadership Summit. A presenter suggested that AI could soon reshape how we think about team size. That got me wondering: what would “one-pizza teams” actually look like if applied to enterprise-grade systems where resilience, compliance, and scalability are non-negotiable?

The Hype: “Do We Even Need Developers?”

In recent months, I’ve heard product leaders speculate that AI might make developers optional. One senior product manager even suggested, half-seriously, that “we may not need developers at all, since AI can write code directly.” On the surface, that sounds bold. But in reality, it reflects limited hands-on experience with the current tools. Generating a demo or prototype with AI is one thing; releasing code into a production system, supporting high-volume, transactional workloads with rollback, observability, and compliance requirements, is another. It’s easy to imagine that AI can replace developers entirely until you’ve lived through the complexity of maintaining enterprise-grade systems.

I’ve also sat in conversations with CTOs and VPs excited about the economics. AI tools, after all, look cheap compared to fully burdened human salaries. On a spreadsheet, reducing teams of 8–12 engineers down to one or two may appear to unlock massive savings. But here again, prototypes aren’t production, and what looks good in theory may not play out in practice.

The Reality Check

The real question isn’t whether AI eliminates developers, it’s how it changes the balance between humans, tools, and team structure. While cost pressures may tempt leaders to shrink teams, the more compelling opportunity may be to accelerate growth and innovation. AI could enable organizations to field more small teams in parallel, modernize multiple subdomains simultaneously, deliver features faster, and pivot quickly to outpace their competitors.

Rather than a story of headcount reduction, one-pizza teams could become a story of capacity expansion, with more teams and a broader scope, all while maintaining the same or slightly fewer people. But this is still, to some extent, a crystal ball exercise. None of us can predict with certainty what teams will look like in three, five, or ten years. What seems possible today is that AI enables smaller pods to take on more responsibility, provided we approach this shift with caution and discipline.

Why AI Might Enable Smaller Teams

AI’s value in this context comes from how it alters the scope of work for each developer.

Hygiene at scale. Practices that teams often defer, such as tests, documentation, release notes, and refactors, can be automated or continuously maintained by AI. Quality could become less negotiable and more baked into the process.

Coordination by contract. AI works best when given context. PR templates, paved roads, and CI/CD guardrails provide part of that. But so do rule files, lightweight markdown contracts such as cursor_rules.md or claude.md that encode expectations for test coverage, security practices, naming conventions, and architecture. These files give AI the boundaries it needs to generate code that aligns with team standards. Over time, this could transform AI from a generic assistant into a domain-aware teammate.
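As a sketch of what such a contract might contain, here is a hypothetical rules file in the spirit of the `cursor_rules.md` mentioned above. The file name comes from the text; the specific rules below are illustrative, not drawn from any particular team:

```markdown
# cursor_rules.md — illustrative team contract for AI-assisted changes

- All new code requires unit tests; target at least 80% coverage on changed lines.
- Never log secrets, tokens, or personally identifiable information.
- Follow existing naming conventions: snake_case for functions, PascalCase for types.
- Respect module boundaries: services communicate only through published APIs.
- Every PR must flag AI-generated portions so reviewers know where to look hardest.
```

The value of a file like this is less the individual rules than that the expectations are written down where both humans and AI assistants can read them.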

Broader scope. With boilerplate and retrieval handled by AI, a small pod might own more of the vertical stack, from design to deployment, without fragmenting responsibilities across multiple groups.

Reduced overhead. Acting as a shared memory and on-demand research partner, AI can minimize the need for lengthy meetings or additional specialists. Coordination doesn’t disappear, but some of the lower-value overhead could shrink.

From Efficiency to Autonomy

The promise isn’t simply in productivity gains per person; it may lie in autonomy. AI could provide small pods with enough context and tooling to operate independently. This autonomy might enable organizations to spin up more one-pizza teams, each capable of covering a subdomain, reducing technical debt, delivering features, or running experiments. Instead of doing the same work with fewer people, companies might do more work in parallel with the same resources.

How Roles Could Evolve

If smaller teams become the norm, roles may shift rather than disappear.

  • Product Managers could prototype with AI before engineers write code, run quick user tests, and even handle minor fixes.
  • Designers might use AI to generate layouts while focusing more on UX research, customer insights, and accessibility.
  • Engineers may be pushed up the value chain, from writing boilerplate to acting as architects, integrators, and AI orchestrators. This creates a potential career pipeline challenge: if AI handles repetitive tasks, how will junior engineers gain the depth needed to become tomorrow’s architects?
  • QA specialists can transition from manual testing to test strategy, utilizing AI to accelerate execution while directing human effort toward edge cases.
  • New AI-native roles, such as prompt engineers, context engineers, AI QA, or solutions architects, may emerge to make AI trustworthy and enterprise-aligned.

In some cases, the traditional boundaries between product, design, and engineering could blur further into “ProdDev” pods, teams where everyone contributes to both the vision and the execution.

The Enterprise Reality

Startups and greenfield projects may thrive with tiny pods or even solo founders leveraging AI. But in enterprise environments, complexity doesn’t vanish. Legacy systems, compliance, uptime, and production support continue to require human oversight.

One-pizza pods might be possible in select domains, but scaling them down won’t be simple. Where it does happen, success may depend on making two human hats explicit:

  • Tech Lead – guiding design reviews, threat modeling, performance budgets, and validating AI output.
  • Domain Architect – enforcing domain boundaries, compliance, and alignment with golden paths.

Even then, these roles rely on shared scaffolding:

  • Production Engineering / SRE – managing incidents, SLOs, rollbacks, and noise reduction.
  • Platform Teams – providing paved roads like IaC modules, service templates, observability baselines, and policy-as-code.

The point isn’t that enterprises can instantly shrink to one-pizza teams, but that AI might create the conditions to experiment in specific contexts. Human judgment, architecture, and institutional scaffolding remain essential.

Guardrails and Automation in Practice

For smaller pods to succeed, standards need to be non-negotiable. AI may help enforce them, but humans must guide the judgment.

Dual-gate reviews. AI can run mechanical checks, while humans approve architecture and domain impacts.

Evidence over opinion. PRs should include artifacts, tests, docs, and performance metrics, so reviews are about validating evidence, not debating opinions.

Security by default. Automated scans block unsafe merges.

Rollback first. Automation should default to rollback, with humans approving fixing forward.

Toil quotas. Reducing repetitive ops work quarter by quarter keeps small teams sustainable.

Beyond CI, AI can also shape continuous delivery by optimizing pipelines, enforcing deployment policies, validating changes against staging telemetry, and even self-healing during failures.

What’s Real vs. Wishful Thinking (2025)

AI is helping, but unevenly. Gains emerge when organizations re-architect workflows end-to-end, rather than layering AI on top of existing processes.

Quality and security remain human-critical. Studies suggest a high percentage of AI-generated code carries vulnerabilities. AI may accelerate output, but without human checks, it risks accelerating flaws.

AI can make reviews more efficient by summarizing diffs and flagging issues, but final approval still requires human judgment on architecture and risk.

And production expectations haven’t changed. A 99.99% uptime commitment still allows only about 13 minutes of downtime per quarter. Even if AI can help remediate, humans remain accountable for those calls.
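That downtime budget is straightforward to compute: the allowed outage is the period length times the unavailability. A quick sketch (function name is mine):

```python
def downtime_budget_minutes(availability: float, days: float) -> float:
    """Allowed downtime, in minutes, for an availability target over a period of `days`."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

# A 99.99% target over a ~91-day quarter:
print(round(downtime_budget_minutes(0.9999, 91.25), 1))  # ≈ 13.1 minutes
```

At 99.9% the quarterly budget jumps to roughly two hours, which is why the extra "nine" is so expensive to operate.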

Practitioner feedback is also worth noting. In conversations with developers and business users of AI, most of whom are still in their first year of adoption, the consensus is that productivity gains are often inflated. Some tasks are faster with AI, while others require more time to manage context. Most people view AI as a paired teammate, rather than a fully autonomous agent that can build almost everything in one or two shots.

Challenges to Consider

Workforce disruption. If AI handles more routine work, some organizations may feel pressure to reduce the scope of specific roles. Whether that turns into cuts or an opportunity to reskill may depend on leadership choices.

Mentorship and pipeline. Junior engineers once learned by doing the work AI now accelerates. Without intentional design of new learning paths, we may risk a gap in the next generation of senior engineers.

Over-reliance. AI is powerful but not infallible. It can hallucinate, generate insecure code, or miss subtle regressions. Shrinking teams too far might leave too few human eyes on critical paths.

A Practical Checklist

  • Product risk: 99.95%+ SLOs or regulated data? Don’t shrink yet.
  • Pager noise: <10 actionable alerts/week and rollback proven? Consider shrinking.
  • Bus factor: ≥3 engineers can ship/release independently? Consider shrinking.
  • AI maturity: Are AI checks and PR evidence mandatory? Consider shrinking.
  • Toil trend: Is toil tracked and trending down? Consider shrinking.

Bottom Line

AI may make one-pizza teams possible, but only if automation carries the repetitive workload, humans retain judgment and oversight, and guardrails enforce standards. Done thoughtfully, smaller pods don’t mean scarcity; they can mean focus.

And when organizations multiply these pods across a portfolio, the outcome might not just be sustaining velocity but accelerating it: more features, faster modernization, shorter feedback loops, and quicker pivots against disruption.

This is the story of AI in team structure, not doing the same with less, but doing more with the same.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

Beyond Delivery: Realizing AI’s Potential Across the Value Stream

September 29, 2025 by philc

Moving beyond AI-assisted delivery to achieve measurable, system-wide impact through value stream visibility and flow metrics.

10 min read

At the 2025 Engineering Leadership Tech Summit, Mik Kersten previewed ideas from his upcoming book, Output to Outcome: An Operating Model for the Age of AI. He reminded us of a truth often overlooked in digital transformation: Agile delivery teams are not the constraint in most cases.

Kersten broke the software value stream down into four phases, Ideate, Create, Release, and Operate, and showed how the majority of waste and delay happens outside of coding. One slide in particular resonated with me. Agile teams accounted for just 8% of overall cycle time. The real delays sat at the bookends: 48% in ideation, slowed by funding models, approvals, and reprioritizations; and 44% in release, bogged down by dependencies, technical debt, and manual processes.

This framing raises a critical question: if we only apply AI to coding or delivery automation, are we just accelerating the smallest part of the system while leaving the actual bottlenecks untouched?

AI in the Delivery Stage: Where the Industry Stands

In a recent DX Engineering Enablement podcast, Laura Tacho and her co-hosts discussed the role of AI in enhancing developer productivity. Much of their discussion centered on the Create and Release stages: code review, testing, deployment, and CI/CD automation. Laura made a compelling point about moving beyond “single-player mode”:

“AI is an accelerant best when it’s used at an organizational level, not when we just put a license in the hands of an individual… Platform teams can own a lot of the metaphorical AI headcount and apply it in a horizontal way across the organization.”

Centralizing AI adoption and applying it across delivery produces leverage, rather than leaving individuals to experiment in isolation. But even this framing is still too narrow.

The Missing Piece: AI Adoption Across the Entire Stream

The real opportunity is to treat AI not merely as a tool for delivery efficiency, but as a partner across the entire value stream. That means embedding AI into every stage and measuring it with system-level visibility, not just delivery dashboards.

This is why I value platforms that integrate tool data across the whole stream, with system-level metrics and visibility dashboards, rather than tools that stop at delivery.

Of course, full-stream visibility platforms are more expensive, and in many organizations, only R&D teams are driving efforts to improve flow. As I’ve argued in past writing on SEI vs. VSM, context matters: sometimes SEI is the right starting point, when delivery is the bottleneck. But when delays span ideation, funding, or release, only a VSM platform can expose and address systemic waste.

AI opportunities across the stream:

  • Ideation (48%) – Accelerate customer research, business case drafting, and approvals; surface queues and wait states in one view.
  • Create (8%) – Apply AI to coding, reviews, and testing, but tie it to system outcomes, not vanity speedups.
  • Release (44%) – Automate compliance, dependency checks, and integration work to reduce handoff delays.
  • Operate – Target AI at KTLO and incident patterns, feeding learnings back into product strategy.

When AI is applied across the whole system (value stream), we can ask a better question: not “How fast can we deploy?” but “How much can we compress idea-to-value?” Moving from 180 days to 90 days or less becomes possible when AI supports marketing, product, design, engineering, release, and support, and when the entire system is measured, not just delivery.

VSM vs. Delivery-Only Tooling

This is where tooling distinctions matter. DX Core 4 and SEI platforms, such as LinearB, focus on delivery (Create and Release), which is valuable but limited to one stage of the system. Planview Viz and other VSM platforms, by contrast, elevate visibility across the entire value stream.

Delivery-only dashboards may show how fast you’re coding or deploying. But Value Stream Management reveals the actual business constraints, often upstream in funding, prioritization, PoCs, and customer research, or downstream in handoffs and release.

Without that lens, AI risks becoming just another tool that speeds up developers without improving the system.

AI as a Force Multiplier in Metrics Platforms

AI embedded directly into metrics platforms can change the game. In a recent Product Thinking podcast, John Cutler observed:

“We talked to a company that’s spending maybe $4 million in staff hours per quarter around just people spending time copying and prepping for all these types of things… All they’re doing is creating a dashboard, pulling together a lot of information, and re-contextualizing it so it looks the same in a meeting. I think that’s just a massive opportunity for AI to be able to help with that kind of stuff.”

This hidden cost of operational overhead is real. Leaders and teams waste countless hours aggregating and reformatting data into slides or dashboards to make it consumable.

Embedding AI into VSM or SEI platforms removes that friction. Instead of duplicating effort, AI can generate dashboards, surface insights, and even facilitate the conversations those dashboards are meant to support.

This is more of a cultural shift than a productivity gain. Less slide-building, more strategy. Less reformatting, more alignment. And metrics conversations that finally scale beyond the few who have time to stitch the story together manually.

The ROI Lens: From Adoption to Efficiency

The ROI of AI adoption is no longer a question of whether to invest; that decision is now a given. As Atlassian’s 2025 AI Collaboration Report shows, daily AI usage has doubled in the past year, and executives overwhelmingly cite efficiency as the top benefit.

The differentiator now is how efficiently you manage AI’s cost, just as the cloud debate shifted from whether to adopt to how well you could optimize spend.

But efficiency cannot be measured by isolated productivity gains. Atlassian found that while many organizations report time savings, only 4% have seen transformational improvements in efficiency, innovation, or work quality.

The companies breaking through embed AI across the system: building connected knowledge bases, enabling AI-powered coordination, and making AI part of every team.

That’s why the ROI lens must be grounded in flow metrics. If AI adoption is working, we should see:

  • Flow time shrinks
  • Flow efficiency rises
  • Waste reduction becomes visible in the stream
  • Flow velocity accelerates (more items delivered at the same or lower cost)
  • Flow distribution rebalances (AI resolving technical debt and reducing escaped defects)
  • Flow load stabilizes (AI absorbing repetitive work and signaling overload early)

System-wide VSM platforms make these signals visible, showing whether AI is accelerating the idea-to-value process across the entire stream, not just helping individuals move faster.
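As a sketch of what these signals mean in data terms, here is a minimal flow-time and flow-efficiency calculation over work-item records; the field names, dates, and active-day counts are hypothetical, not a Planview or DX schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class WorkItem:
    started: date        # entered the value stream
    finished: date       # delivered to the customer
    active_days: int     # days of actual work; the rest is queue/wait time

def flow_time(item: WorkItem) -> int:
    """Elapsed calendar days from start to finish (idea-to-value time)."""
    return (item.finished - item.started).days

def flow_efficiency(item: WorkItem) -> float:
    """Share of flow time spent actively working; low values signal wait states."""
    return item.active_days / flow_time(item)

# Two hypothetical items: both took 60 days end to end, but were worked
# for only 12 and 9 days respectively -- most of the time was waiting.
items = [
    WorkItem(date(2025, 1, 6), date(2025, 3, 7), active_days=12),
    WorkItem(date(2025, 2, 3), date(2025, 4, 4), active_days=9),
]
avg_efficiency = sum(flow_efficiency(i) for i in items) / len(items)  # -> 0.175
```

If AI adoption is working at the system level, flow time should fall and this efficiency ratio should rise, regardless of how fast any individual developer codes.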

Bringing It Full Circle

In recent conversations with a large organization’s CTO, and again with Laura while exploring how DX and Anthropic measure AI, I kept returning to the same point: we already have the metrics to know if AI is making an impact. AI is now just another option or tool in our toolbox, and its effect is reflected in flow metrics, change failure rates, and developer experience feedback.

We are also beginning to adopt DX AI Framework metrics, which are structured around Utilization, Impact, and Cost, aligning with the metrics that companies like Dropbox and Atlassian currently measure. But even as we incorporate these, we continue to lean on system-level flow metrics as the foundation. They are what reveal whether AI adoption is truly improving delivery across the value stream, from ideation to production.

Leadership Lessons from McKinsey and DORA

This perspective also echoes Ruba Borno, VP at AWS, in a recent McKinsey interview on leading through AI disruption. She noted that while AI’s pace of innovation is unprecedented, only 20–30% of proofs of concept reach production. The difference comes from data readiness, security guardrails, leadership-driven change management, and partnerships.

And the proof is tangible: Canva, working with Amazon Bedrock, moved from the idea of Canva Code to a launched product in just 12 weeks. That’s precisely the kind of idea-to-operation acceleration we need to measure. It shows that when AI is applied systematically, you don’t just make delivery faster; you also make the entire flow from concept to customer measurably shorter.

The 2025 DORA State of AI-Assisted Software Development Report reinforces this reality. Their cluster analysis revealed that only the top performers, approximately 40% of teams, currently experience AI-enhanced throughput without compromising stability. For the rest, AI often amplifies existing dysfunctions, increasing change failure rates or generating additional waste.

Leadership Implications: What the DORA Findings Mean for You

The 2025 DORA report indicates that only the most mature teams currently benefit from AI-assisted coding. For everyone else, AI mostly amplifies existing problems. What does that mean if you’re leading R&D?

1. Don’t skip adoption, but don’t roll it out unthinkingly.

AI is here to stay, but it’s not a silver bullet. Start small with teams that already have strong engineering practices, and use them to build responsible adoption patterns before scaling.

2. Treat AI as an amplifier of your system.

If your flow is healthy, AI accelerates it. If your flow is dysfunctional, AI makes it worse. Think of it like a turbocharger: great when the engine and brakes are tuned, dangerous when they’re not.

3. Use metrics to know if AI is helping or hurting.

  • Flow time, efficiency, and distribution should improve.
  • DORA’s stability metrics (such as change failure rate) should remain steady or decline.
  • Developer sentiment should show growing confidence, not frustration.

4. Fix bottlenecks in parallel.

AI won’t remove waste; it will expose it faster. Eliminate approval delays, reduce tech debt, and streamline release processes so AI acceleration actually creates value.

5. Carry the right message.

The lesson isn’t “don’t adopt AI.” It’s: adopt responsibly, measure outcomes, and strengthen your system so that AI becomes an accelerant, not a liability.

Ruba’s message, reinforced by both McKinsey and DORA, leads to the same conclusion: AI adoption succeeds when it’s measured at the system level, tied to business outcomes, and championed by leadership. Without that visibility, organizations risk accelerating pilots that never translate into value.

Conclusion: Beyond Delivery

The conversation about AI in software delivery is maturing. It’s no longer just about adoption, but about managing costs and system impact. AI must be measured not only by its utilization but also by how it improves flow efficiency, compresses the idea-to-value cycle, and reduces systemic waste.

The organizations that will win in this new era are those that:

  • Embed AI across the entire value stream, not just in delivery.
  • Measure ROI through flow metrics that connect improvements to business outcomes.
  • Manage AI’s cost as carefully as they once managed cloud costs.
  • Lead with visibility, change management, and partnerships to scale adoption.

And critically, successful AI integration requires more than deploying tools. It requires thoughtful measurement, training, and implementation practices that sustain quality, applied consistently across every role, from product and design to operations and support. Only then can organizations ensure that the promise of acceleration improves outcomes without undermining the collaboration and sustainability that long-term software success depends on.

In short: AI in delivery is helpful, but AI across the value stream is transformational.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  • Atlassian. (2025). How leading companies unlock AI ROI: The AI Collaboration Index. Atlassian Teamwork Lab. Retrieved from https://atlassianblog.wpengine.com/wp-content/uploads/2025/09/atlassian-ai-collaboration-report-2025.pdf
  • Borno, R., & Yee, L. (2025, September). How to lead through the AI disruption. McKinsey & Company, At the Edge Podcast (transcript). Retrieved from https://www.mckinsey.com
  • Cutler, J. (2025, September 23). Product Thinking: Freeing Teams from Operational Overload [Podcast]. Episode 247. Apple Podcasts. https://podcasts.apple.com/us/podcast/product-thinking/id1550800132?i=1000728179156
  • DX, Engineering Enablement Podcast. (2025). Episode excerpt on AI’s role in developer productivity and platform teams. DX. (Quoted in article from Laura Tacho). Episode 90, https://podcasts.apple.com/us/podcast/the-evolving-role-of-devprod-teams-in-the-ai-era/id1619140476?i=1000728563938
  • DX (Developer Experience). (2025). Measuring AI code assistants and agents: The DX AI Measurement Framework™. DX Research, co-authored by Abi Noda and Laura Tacho. Retrieved from https://getdx.com (Image: DX AI Measurement Framework).
  • Kersten, M. (2025). Output to Outcome: An Operating Model for the Age of AI (forthcoming). Presentation at the 2025 Engineering Leadership Tech Summit.
  • Google Cloud & DORA (DevOps Research and Assessment). (2025). 2025 State of AI-Assisted Software Development Report. Retrieved from https://cloud.google.com/devops/state-of-devops

Further Reading

For readers interested in exploring AI ideas further, here are a few related pieces from my earlier writing:

  • AI in Software Delivery: Targeting the System, Not Just the Code
  • AI Is Improving Software Engineering. But It’s Only One Piece of the System
  • Leading Through the AI Hype in R&D
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Software Engineering, Value Stream Management

