
Why Value Stream Management and the Product Operating Model Matter (and What Comes Next)

November 5, 2025 by philc

6 min read

I had the opportunity to revisit my January article and refine its key points for a recent Flowtopia.io post.

Seeing the Why Behind the Frameworks

In 2021, as part of our evolving Agile transformation, I introduced Value Stream Management (VSM) and later championed the Product Operating Model (POM). Yet I never clearly articulated why these practices mattered.

Looking back, we had already been moving toward a product-oriented model long before naming it. Cross-functional product teams operated organically but without shared governance. When capacity pressures mounted, priorities blurred and inefficiencies surfaced, showing that alignment and communication of purpose are as essential as the frameworks themselves.

Inside my own organization, alignment lagged. Technology advanced rapidly, and engineers and Agile Leaders embraced flow metrics and value-stream thinking, while the product function remained loosely engaged. Without clear accountability, the message fractured: technology optimized for flow; product managed for capacity. The gap limited our ability to realize the frameworks’ potential.

This imbalance is common. Most organizations face more work than they have capacity for, making prioritization and a focus on outcomes essential. VSM and the Product Operating Model address this directly, aligning teams, optimizing workflows, and ensuring that every hour of capacity contributes to real value.

“Adopting frameworks isn’t enough; leaders must over-communicate their purpose.”

The Turning Point: When Efficiency Isn’t Enough

Every transformation reaches a moment of truth. You automate more, deploy faster, and report higher output, yet business leaders still ask, “How are our investments being utilized?”

The disconnect isn’t about effort or talent, but about visibility. Most digital organizations struggle to clearly understand how knowledge work flows or how investments in Scrum, Kanban, DevOps, automation, and now AI impact performance. Teams, in turn, can’t see how their daily work ties to customer or business outcomes.

That’s where VSM and POM intersect, two complementary frameworks that connect flow, alignment, and outcomes. Both emerged from the same realization: efficiency alone is insufficient. Without linking how value flows to what outcomes it creates, organizations risk optimizing for motion instead of progress. Sustaining expertise and funding across a product’s lifespan, rather than through short-term projects, produces better results.

From Projects to Products

For decades, technology was treated as a cost center measured by utilization and velocity. Projects were funded, staffed, delivered, and disbanded. The product model reversed that logic.

By aligning long-lived teams around customer and business outcomes, organizations create ownership and continuity. Each team becomes responsible not only for delivery, quality, and security but also for the impact of its outcomes.

In our case, context switching dropped. Developers embedded in single domains became accountable for both flow and customer success. Priorities shifted faster, decisions stayed within teams, and purpose became clearer. When people see how their work creates value, metrics start to matter.

Context Is Everything

“There is no one-size-fits-all approach to transformation. The true power of frameworks like VSM and POM lies in their flexibility to serve as blueprints rather than rigid rules.”

Adoption succeeds only when frameworks align with an organization’s structure, culture, and leadership context. Models fail not by design but by misapplication. That’s why effective organizations start by seeing their system before changing it.

Value Stream Mapping provides visibility, showing how work moves, where it slows, and how efficiently it reaches customers. Flow Engineering practices, such as Outcome Maps, Current-State Maps, and Dependency Maps, enable leaders to visualize how work, teams, and dependencies interact. These visualizations reveal friction, conflicting priorities, and hidden handoffs that delay the realization of value.

“Visibility creates alignment. Alignment establishes the foundation for improvement.”

The 2024 Project to Product State of the Industry Report confirms that elite organizations don’t just implement frameworks; they adapt them to fit their structure and customer context. That adaptability turns adoption into transformation.

Flow and Realization: The Two Sides of Value

Every delivery system operates in two dimensions:

Flow – how efficiently value moves.

Realization – how effectively that value produces business or customer outcomes.

Most organizations measure one and overlook the other or treat them as separate conversations.

Flow metrics, including Flow Time, Velocity, Efficiency, Distribution, and DORA metrics, reveal system health but not its impact.

Realization metrics, such as retention, revenue contribution, and time-to-market, show outcomes but not efficiency.

“Flow transforms effort into movement; realization transforms movement into impact.”

The 2024 Project to Product Report found that fewer than 15% of organizations integrate flow metrics with business outcomes. Yet those that do so outperform their peers on both speed and customer satisfaction.
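To make the flow side of this concrete, here is a minimal sketch of how Flow Time and Flow Efficiency can be derived from work-item timestamps. The data shape and field names are illustrative assumptions, not the schema of any particular VSM platform:

```python
from datetime import datetime

# Hypothetical work items: when work started, when it finished,
# and how much of the elapsed time was active work rather than waiting.
work_items = [
    {"started": datetime(2025, 1, 6), "finished": datetime(2025, 1, 20), "active_days": 4},
    {"started": datetime(2025, 1, 8), "finished": datetime(2025, 1, 15), "active_days": 3},
]

def flow_time_days(item):
    """Flow Time: elapsed calendar days from start to finish."""
    return (item["finished"] - item["started"]).days

def flow_efficiency(item):
    """Flow Efficiency: share of elapsed time spent actively working."""
    return item["active_days"] / flow_time_days(item)

for item in work_items:
    print(flow_time_days(item), round(flow_efficiency(item), 2))

# Realization metrics (retention, revenue contribution, time-to-market) live in
# business systems, not delivery tooling, which is why the two views must be
# joined deliberately rather than assumed to overlap.
```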

Measuring Across Layers

Metrics operate across three layers:

• System Layer: Flow & DORA metrics reveal delivery efficiency.

• Team Layer: Developer Experience (DX) and sentiment show team health.

• Business Layer: Realization metrics link work to outcomes.

Connecting these layers turns measurement into meaning and prevents metric theater, reporting what’s easy instead of what matters.

Leadership and Structure: The Missing Link

Even the best frameworks fail without a shift in leadership. Adopting VSM and POM means transitioning from a command-and-control approach to one of clarity, from managing tasks to managing systems.

Delegation and empowerment become strategic levers. Leaders define and communicate outcomes and boundaries; teams own delivery, quality, and learning within them. Guided by data-driven feedback, they experiment and improve.

The best teams treat flow and realization as continuous feedback loops, a living system that evolves with every release.

Governance through transparency replaces micromanagement. Dashboards enable leaders to coach, rather than control, by focusing on flow, bottlenecks, and opportunities. Empowerment becomes shared ownership of outcomes.

A mature value-stream culture recognizes that leadership doesn’t disappear, but evolves. The leader’s job is to design the system where great work happens, not be the system itself.

What Comes Next: Amplification Through AI

Organizations often ask, “What’s next?”

The answer is amplification, using technology, data, and AI to accelerate insight and learning.

AI doesn’t change your system; it magnifies it. If your processes are slow, AI exposes that faster. If your system is healthy, it enhances visibility, identifies bottlenecks, and predicts where investment yields the highest return.

The future of AI in VSM is about augmenting human judgment, not replacing it. Intelligent automation links flow metrics to outcomes, detects deviations early, and surfaces recommendations that leaders can act on in real time. This evolution expands the leader’s role once again, from observer to orchestrator of improvement.

Bridging Technology and Business Value

My ongoing focus is strengthening the connection between technology execution and business outcomes, a lesson shaped by feedback from an executive 360-degree assessment: “You should focus more on business results as a technology leader.”

That insight was right. We transformed from a monolithic architecture and waterfall process into a world-class Agile, microservices-based organization, yet we hadn’t consistently shown how that transformation delivered measurable business results.

To close that gap, we’re developing tools that make value visible:

• Value Stream Templates to connect work with business objectives.

• Initiative & Epic Definitions emphasizing outcomes and dependencies.

• Team-Level OKRs tied to measurable business priorities.

• Knowledge Hub Updates highlighting outcomes over outputs.

The 2024 Project to Product Report found that organizations that consistently link delivery, metrics, and business outcomes outperform their peers in terms of agility, profitability, and retention.

“The answers reveal whether your organization is optimizing activity or enabling value.”

The Real Transformation

When combined, VSM and POM unlock a higher level of capability. They teach leaders to see how work flows, how people collaborate, and how outcomes drive real impact.

When you see work as a flow of value rather than a measure of effort, you stop managing activity and start leading outcomes.

That’s the actual transformation, shifting focus from what we deliver to what difference it makes.

“The time to act is now. Let’s lead purposefully, ensuring our teams deliver meaningful, measurable value in 2026 and beyond.”

Transformation is never solitary; shared understanding across our industry is where alignment begins.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. The 2024 Project to Product State of the Industry Report, Planview, https://info.planview.com/project-to-product-state-of-the-industry-_report_vsm_en_reg.html
  2. Why Value Stream Management and the Product Operating Model Matter, Rethink Your Understanding, https://rethinkyourunderstanding.com/2025/01/why-vsm-and-the-product-operating-model-matter/

Filed Under: Agile, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

The Price of Alignment

October 21, 2025 by philc

How Well-Intentioned Integration Can Undermine Modern Architecture and Team Autonomy

8 min read

Summary (TL;DR)

This story illustrates what happens when a company designed around microservices and autonomous teams is integrated into a larger organization built on monolithic systems and centralized management.

LegacyTech (the acquirer) operates with large, consistent teams managed through a fixed span-of-control model, with a target of one engineering manager overseeing eight software engineers.

AgileWorks (the acquired company) evolved over a decade into small, cross-functional, domain-aligned microservice teams, each independently deployable and self-managing.

During the final stages of integration, AgileWorks’ structure was eventually forced into LegacyTech’s model in the name of “consistency.” What seemed efficient on paper can lead to architectural regression, slower delivery, blurred ownership, and increased coordination overhead.

Misunderstandings, such as questioning why AgileWorks had “so many SKUs or products,” stemmed from not recognizing that each SKU or product represented a bounded context and an independently deployable domain, the hallmark of microservice design.

The result: a collision between two successful systems optimized for different goals, independence vs. uniformity.

The lesson is clear:

If architecture and structure drift apart, performance can erode. You can’t integrate autonomy into hierarchy without changing both.

Before enforcing structural consistency, leaders must understand the organizational DNA that made the acquired system successful. Otherwise, the very advantages that justified the acquisition are the first to disappear.

Opening Reflection

Having been part of numerous acquisitions and technical due diligence efforts throughout my career, I’ve observed familiar patterns emerge when one company merges with another. Leadership often evaluates the architecture, technology stack, team skills, and organizational structure to assess fit and value.

Having observed these patterns across many organizations’ transformation journeys, I’ve seen how integration decisions can either reinforce or erode what made an acquisition valuable. But in a recent acquisition I observed, something different unfolded: the integration process risked undoing the very capabilities that had made the acquired organization successful. Its architecture and team design, once aligned for speed and autonomy, were being reshaped in ways that constrained both.

A large, established technology company, let’s call it LegacyTech, acquired a smaller, fast-moving company, AgileWorks.

LegacyTech is a successful and stable company with products that have stood the test of time, built on an architecture that is still largely monolithic. It has begun investing in modernization and technology transformation. However, LegacyTech’s system continues to prioritize control, reliability, and predictability.

Its teams are large, their processes consistent, and their management model straightforward: one engineering manager for every six to ten software engineers (eight being the target), one product manager per team, and one operating rhythm that keeps everything aligned.

AgileWorks, by contrast, had spent nearly a decade transforming itself. Through persistent experimentation, modernization, learning, and automation, it evolved from a monolithic architecture into a predominantly microservice-based one. Its teams were small, domain-aligned, and self-managing, each owning its own domain or set of subdomains, data, and deployments. They could move quickly, deliver independently, and continuously improve without waiting for managerial approvals.

AgileWorks maintained a span-of-control target of five direct reports per Engineering Manager. However, EMs did not oversee delivery teams or manage the engineers within them; their direct reports were distributed across multiple teams. Each delivery team instead had a dedicated Agile Leader responsible for its performance and health.

By the time the acquisition happened, AgileWorks had become everything many companies aspire to be: autonomous, modern, and fast. And that’s when LegacyTech stepped in to acquire them.

The Integration Challenge

After the acquisition, LegacyTech began integrating the smaller company’s people and processes into its larger framework. The goal was clear: consistency.

Leadership required all teams, whether legacy or newly acquired, to adhere to the same structure, reporting lines, career development framework, and management model.

So, AgileWorks’ small, autonomous domain teams were expected to merge into LegacyTech’s span-of-control structure. Each team was restructured to fit a standardized span-of-control model, regardless of its domain boundaries or architectural design.

On paper, it looked rational, efficient, uniform, and fair. But structure doesn’t just describe how people work; it shapes how people work.

When Structure Shapes the System

There’s a principle known as Conway’s Law, which says:

“Organizations design systems that mirror their communication structures.”

At AgileWorks, small, autonomous teams built and deployed small, autonomous services.

At LegacyTech, large, coordinated teams built large, coordinated systems.

When the two merged, the system began to change, not through code, but through structure. Teams that once released independently now had to coordinate across domains, with engineering managers juggling competing priorities across multiple queues.

Over time, the architecture could start to mirror the new structure, slower, more coupled, and increasingly dependent on synchronization rather than flow.

This wasn’t failure but cause and effect, the predictable outcome of two systems built to operate by different rules.

Why the “Too Many SKUs” Question Misses the Point

As integration continued, a question surfaced from LegacyTech’s leaders:

“Why does AgileWorks have so many SKUs?”

From the outside, it appeared excessive or confusing. From the inside, it reflected transformation and maturity. Each ‘SKU’ represented a bounded context, a product or domain with its own stack, team, and flow of value.

That’s the natural outcome of a microservice architecture. As Dave Farley describes it, a microservice is small, focused on one task, aligned with a bounded context, autonomous, independently deployable, and loosely coupled.

The independence of those services is the entire point. It enables small, decoupled teams to work in parallel, which DORA research repeatedly shows as a leading predictor of delivery performance:

“Smaller, autonomous teams build better software faster.”

So, what looked to LegacyTech like “too many SKUs” was actually a reflection of architectural evolution, the visible signature of independence and scale.

The Hidden Cost of Consistency

Consistency is valuable. It brings clarity, career alignment, predictability, and a sense of order that scales well.

However, when consistency overlooks architectural context, it can quietly undermine the very advantages an acquisition was intended to bring.

By restructuring domain-aligned teams under broader spans of control, LegacyTech risked recentralizing what had been purposefully decentralized.

From the outside, things still appear aligned. Inside, work begins to queue, decisions bottleneck, and delivery can erode. What seems to be an efficient integration can quietly become architectural regression.

Leadership Context and Legacy Mindsets

During integration discussions, LegacyTech’s leaders emphasized the need for consistency. One executive put it plainly:

“We can’t redesign all of our teams to look like the company we acquired, so we’ll redesign theirs to look like ours.” From their perspective, it made sense. LegacyTech had twenty or more teams; AgileWorks had eleven.

Why redesign twenty when you can standardize eleven? And the twenty monolithic teams couldn’t fit within AgileWorks’ model. LegacyTech’s teams operated on a one-to-one structure, with one engineering manager paired with one product manager, both of whom oversaw a single, unified codebase.

Under the hybrid model introduced by AgileWorks’ leader, one engineering manager now manages two or three domains, each with its own Product Manager. The design preserved domain alignment but blurred accountability and added coordination overhead.

Engineering Managers are now also held accountable for delivery and team performance as part of this shift, which removed the Agile Leader role from teams. It was a compromise, an attempt to preserve domain alignment while satisfying span-of-control requirements, with eight software engineers reporting to one manager. However, this consolidation also increased cognitive and operational load for managers and made the model harder for others to understand.

Beneath these decisions lay something deeper: a legacy mindset forged in years of success managing monolithic systems. Their definition of success came from environments where control, coordination, and uniformity produced predictability.

They hadn’t yet led in a distributed, microservices-based domain architecture, one where success depends on small, autonomous teams empowered to make local decisions and deploy independently, the very capabilities that drive speed and resilience.

It wasn’t resistance. It was familiarity. Leadership was applying what had always worked, unaware that the system underneath was fundamentally different. And that’s the real tension in most integrations: the experience that once guaranteed success can become the very thing that slows adaptation.

The Diligence Gap

This mismatch usually starts long before integration.

Traditional due diligence focuses on technology assets, the stack, scalability, and delivery practices. What it often overlooks is organizational architecture, how teams are structured to support that system.

AgileWorks’ success wasn’t just in its code. It was in how its teams worked: decoupled, autonomous, and aligned with their architecture.

When that alignment is ignored, integration can dismantle the very system that made the technology valuable in the first place.

You can acquire the product, but if you don’t understand the system that built it, you risk changing both, usually not for the better.

The Crossroads

As integration continued, the senior leader at AgileWorks consolidated software engineers by their existing domains under a single engineering manager. It was a hybrid attempt, a way to meet span-of-control targets while keeping domain boundaries intact.

Yet this structure puzzled many senior leaders, who struggled to understand how autonomy could coexist with standardization. The organization soon faced a pivotal choice:

Should it preserve the hybrid model, allowing domain teams to remain somewhat autonomous under shared leadership, or fully merge everything into one standardized structure?

Each path carried clear trade-offs. Respecting autonomy required leaders to adopt new management patterns and rethink the Engineering Manager role itself. Full consolidation simplified reporting and standardized roles, but re-coupled systems that were purposefully independent.

Whichever path they chose, one thing became clear:

The future depended not on process, but on leadership understanding.

Because structure follows mindset, and mindset follows experience.

Closing Reflection

This isn’t a story about who was right or wrong, but instead a story about what happens when architecture and structure drift apart.

When a large, monolithic company acquires a smaller, microservice-architected one, success depends not only on integrating tools and people, but also on integrating understanding.

LegacyTech didn’t fail; it followed its playbook. AgileWorks didn’t resist; it followed its principles. Both were right in their own context.

The lesson for any organization is simple: Before enforcing structural consistency, understand the architecture and operating model you’re inheriting.

Because when you acquire an architecture, you also acquire the organizational DNA that built it. Change one without understanding the other, and you may find the system changes on its own, just not in the direction you intended.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

Beyond the Beyond Delivery: AI Across the Value Stream

October 11, 2025 by philc

A follow-up article and reflection on how AI amplifies the systems it enters, and why clarity in measurement and language defines its true impact.

4 min read

After reading Laura Tacho’s latest article, “What the 2025 DORA Report Means for Your AI Strategy,” published today by DX, I found myself nodding along from start to finish. Her analysis reinforces what many of us have been saying for the past year: AI doesn’t automatically improve your system; it amplifies whatever already exists within it.

If your system is healthy, AI accelerates learning, delivery, and improvement. If it’s fragmented or dysfunctional, AI will only expose that reality faster.

In my earlier and related article, “Beyond Delivery: Realizing AI’s Potential Across the Value Stream,” I explored this same theme, referencing Laura’s previous work and the DX Core Four research to show how AI’s true promise emerges when applied across the entire value stream, not just within delivery. Her new reflections build on that conversation beautifully, grounding it in DORA’s 2025 findings and placing even greater emphasis on what truly determines AI success: measurement, monitoring, and system health.

AI’s True Leverage Is in the System

What stands out in both discussions is that AI amplifies the system it enters.

Healthy systems, with strong engineering practices, small-batch work, solid source control, and active observability, see acceleration. Weak systems, where friction and inconsistency already exist, see those problems amplified.

That’s why measurement and feedback are the new leadership disciplines.

Organizations treating AI as a system-level investment, rather than a tool for individual productivity, are seeing the greatest impact. They aren’t asking “how many developers are using Copilot?” but instead “how is AI helping our teams improve outcomes across the value stream?”

DORA’s latest research validates that shift, focusing less on adoption rates and more on outcomes. It echoes a point Laura made and I emphasized in my own writing: AI’s advantage is proportional to the strength of your engineering system.

Why Clarity Still Matters

While I agree with nearly everything in Laura’s article, one nuance deserves attention, not as a critique, but as context.

DORA, DX Core 4, LinearB, and other Software Engineering Intelligence (SEI) platforms are not Value Stream Management (VSM) platforms. They measure a segment of the delivery lifecycle: create and release. True VSM, however, spans the entire lifecycle, from idea to delivery and operation.

This distinction matters because where AI is applied should match where your bottlenecks exist.

If your constraint is upstream, in ideation or backlog management, and you only apply AI within development, you’re optimizing a stage that isn’t the problem.

Think of your value stream as four connected tanks of water: ideation, creation, release, and operation.

If the first tank (ideation) is blocked, making the water move faster in the second (creation) doesn’t improve throughput. You’re just circulating water in your own tank while everything above remains stuck.
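A toy simulation makes the tank analogy concrete. The stage names and weekly rates below are illustrative assumptions, not measured figures; the point is simply that accelerating a non-constraint stage leaves end-to-end throughput unchanged:

```python
# Throughput of a serial value stream is capped by its slowest stage (the constraint).
def throughput(stage_rates):
    """Items per week the whole stream can sustain end to end."""
    return min(stage_rates.values())

baseline = {"ideation": 2, "creation": 10, "release": 8, "operation": 12}
faster_creation = {**baseline, "creation": 20}    # AI applied only to development
unblocked_ideation = {**baseline, "ideation": 6}  # AI applied at the constraint

print(throughput(baseline))            # 2 -> limited by ideation
print(throughput(faster_creation))     # still 2 -> speeding up creation changed nothing
print(throughput(unblocked_ideation))  # 6 -> relieving the constraint lifts the system
```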

That’s why AI should be applied where it can improve the overall flow, across the whole system, not just a single stage.

It’s also where clarity of language matters. Some Software Engineering Intelligence (SEI) platforms, including the one from Laura’s organization, integrate DORA metrics within broader insights and occasionally describe their approach as VSM. From a marketing standpoint, that’s understandable; SEI platforms compete with full-scale VSM platforms, such as Planview Viz, which measure the entire value stream. However, it’s worth remembering that DORA and most SEI metrics represent one vital stage, not the entire system.

On Vendors, Neutrality, and Experience

I have deep respect for Laura and her organization’s work advancing how we measure and improve developer experience. Over the last four years, I’ve also established professional relationships with several of these platform providers, offering feedback and leadership perspectives to their teams as they evolve their products and strategies.

I share this because my perspective is grounded in firsthand experience, research, and conversations across the industry, not because of any endorsement. I’m not paid to promote any vendor. Those who know me are aware that I have my preferences, currently Planview Viz for Value Stream Management, as well as LinearB and the DX Core 4 for Software Engineering Intelligence and developer-experience insights.

Each offers unique value, but I’ve yet to see a single platform deliver a truly complete view across all stages, combining full system-level metrics and team sentiment data. Until that happens, I’ll continue to advocate for clarity of terms and how these solutions market themselves, and measurements that accurately reflect reality.

And to be fair, I haven’t kept up with every vendor’s latest releases, so I encourage anyone exploring these tools to do their own research and choose what best fits their organization’s context and maturity.

Closing Thought

Laura’s article is spot-on in identifying what really drives AI impact: monitoring, measuring, and managing the system it touches.

That’s the same theme at the heart of Beyond Delivery: that AI’s potential isn’t realized through automation alone, but through its ability to illuminate flow, reveal friction, and help teams improve faster than before.

When we describe our systems accurately, we focus on what truly matters, and that’s when AI stops being a tool for speed and becomes an accelerant for value across the entire system.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Tacho, Laura. “What the 2025 DORA Report Means for Your AI Strategy.” DX Newsletter, October 8, 2025.
    Available at: https://newsletter.getdx.com/p/2025-dora-report-means-for-your-ai-strategy
  • Clark, Phil. “Beyond Delivery: Realizing AI’s Potential Across the Value Stream.” Rethink Your Understanding, September 2025.
    Available at: https://rethinkyourunderstanding.com/2025/09/beyond-delivery-realizing-ais-potential-across-the-value-stream/
  • DORA Research Team. “2025 State of AI-Assisted Software Development (DORA Report).” Google Cloud / DORA, September 2025.
    Available at: https://cloud.google.com/devops/state-of-devops

Filed Under: Agile, AI, DevOps, Metrics, Product Delivery, Software Engineering, Value Stream Management

What Happens When We Eliminate the Agile Leader?

October 9, 2025 by philc

The hidden cost of removing the role that protects flow, team health, and continuous improvement

7 min read

Every few months, the “Agile is Dead” conversation surfaces in leadership meetings, LinkedIn threads, or hallway debates. Recently, I’ve been reflecting on it from two angles:

  • First, I’ve seen organizations under new leadership take very different paths; some thrive with dedicated Scrum Masters or Agile Delivery Manager roles, while others remove them and shift responsibilities to engineering managers and teams.
  • Second, I came across a LinkedIn post describing companies letting go of Scrum Masters and Agile coaches, not for financial reasons, but as a conscious redesign of how they deliver software.

Both perspectives reveal a deeper confusion. Many believe Agile itself is outdated; others assume that if Scrum changes, the role associated with it, the Scrum Master, should disappear too.

But are teams really outgrowing Agile?

Or are we simply misunderstanding the purpose of the Agile leader?

Agile Isn’t Dead, But It’s Often Misapplied

When people say “Agile is dead,” they’re rarely attacking its principles. Delivering in small batches, learning fast, and adapting based on feedback are still how modern teams succeed. What’s fading is the packaged version of Agile, the one sold through mass certifications, rigid frameworks, and transformation playbooks.

Much of the backlash comes from poor implementations. Consulting firms rolled out what they called “textbook Scrum,” blending practices from other frameworks, such as story points and user stories from Extreme Programming (XP), and applying them everywhere. Teams focused on sprints, standups, and rituals instead of learning and improvement.

Scrum was never meant to be rigid; it’s a lightweight framework for managing complexity. When treated as a checklist, it becomes “cargo-cult” Agile, copying rituals without purpose. When that fails, organizations often blame the framework, rather than the implementation.

That misunderstanding extends to the Scrum Master role itself. Many assume that dropping Scrum means dropping the Scrum Master. But the need for someone to coach, facilitate, and sustain continuous improvement doesn’t vanish when frameworks evolve.

Do We Still Need an Agile Leader?

Whether they still follow Scrum or are transitioning to Kanban or hybrid flow models, many organizations are eliminating Agile leadership roles. Responsibilities once owned by a Scrum Master or Agile Coach are now:

  • absorbed by Engineering Managers,
  • distributed across team members, or
  • elevated to Program Management.

On paper, this looks efficient. In reality, it often creates a gap because no one is explicitly accountable for maintaining flow, team health, and continuous improvement.

The Role’s Evolution and Its Reputation

Over time, the Scrum Master evolved into roles such as Agile Coach, Agile Leader, or Agile Delivery Manager (ADM), leaders who:

  • coached flow and sustainability,
  • resolved cross-team dependencies,
  • championed experimentation and team health,
  • used flow metrics to surface bottlenecks and team delivery performance, and
  • connected delivery initiatives and epics with business outcomes.

These were not meeting schedulers. They were system stewards, enabling teams to deliver effectively and sustainably.

Unfortunately, the role’s reputation suffered as the industry scaled too fast. The explosion of two-day certification courses created an influx of “certified experts” with little experience. Many were placed in impossible positions, expected to transform organizations without the authority or mentorship to succeed. Some individuals grew into exceptional Agile leaders, while others struggled.

The uneven quality left leaders skeptical. That’s not a failure of the role itself, but a byproduct of how quickly Agile became commercialized.

When the Role Disappears (or Gets Folded Into Management)

In some organizations, the Agile leadership role has been absorbed by Engineering Managers. On paper, this simplifies accountability and structure. In practice, it creates new trade-offs:

  • Overload: Engineering Managers juggle hiring, technical design and strategy, people development, and implementation oversight. Adding Agile facilitation stretches them thin.
  • Loss of neutrality: It’s hard to be both coach and evaluator. Psychological safety and open reflection suffer.
  • Reduced focus: Good Agile leaders specialize in flow, metrics, and process improvement. Those responsibilities often fade when combined with other priorities.

I’m watching this shift happen in real time. In one organization that removed its Agile leaders, Engineering Managers now coordinate ceremonies and metrics while trying to sustain alignment. The administrative tasks are covered, but continuous improvement and team sentiment have slipped out of focus. There’s only so much one role can absorb before something important gives way.

These managers, once deeply technical and people-oriented, now find themselves stretched across too many competing responsibilities. It’s still early, but the question isn’t whether meetings happen; it’s whether performance, flow, and engagement can be sustained without a separate role dedicated to nurturing them.

Redistribution to Program Management

Some of the higher-level coaching and metrics work has moved into Program Management. Many program managers at this organization hold Scrum Master certifications and act as advisors to Engineering Managers, while maintaining flow metrics and ensuring value stream visibility.

It’s a reasonable bridge, but scale limits its impact. A single program manager may support six to eight teams, focusing only on the most critical issues. The broader discipline of continuous improvement, including reviewing flow data, addressing bottlenecks, or mapping value streams, risks fading when no one on the team is closely involved.

Distributing or Rotating Responsibilities

Some teams attempt to share Agile responsibilities: rotating facilitators, distributing meeting ownership, or collectively tracking metrics. It’s a well-intentioned model that works for mature, stable teams, but it has limits.

  • Frequent rotation breaks continuity and learning.
  • Coaching depth is lost when no one develops mastery.
  • Under delivery pressure, improvement tasks fall to the bottom of the list.

Distributed ownership can work in bursts, but it rarely sustains long-term improvement. Someone still needs to own the system, even if the title is gone.

Leadership Mindsets Define Success

Whether an organization retains or removes Agile leaders often comes down to mindset.

Execution-First Leadership (Command & Control):

  • Believes delivery can be managed through structure and accountability.
  • Sees facilitation and coaching as overhead.
  • Accepts distributed ownership as “good enough.”

Systems-Enabling Leadership (Servant / Flow):

  • Believes facilitation and improvement require focus and skill.
  • Invests in Agile leaders to strengthen flow and collaboration.
  • Sees distributed responsibility as a step, not a destination.

Neither model is inherently wrong; they reflect different views on how improvement happens. But experience shows a clear trade-off: when continuous improvement is one of many responsibilities, it often becomes no one’s priority. A dedicated Agile leader keeps that focus alive; an overloaded manager rarely can for long. The key is designing a system where improvement has space to breathe, not just another task on an already full plate.

The Myth of the Unicorn

When organizations integrate Agile leadership into engineering management or product management, they often create “unicorns”: individuals expected both to possess deep core skills and to serve as effective leaders, delivery owners, and process coaches simultaneously.

Those who can do this well are rare, and even they struggle with constant task-switching across competing priorities. When these high performers leave, the organization loses more than a person; it loses context, flow awareness, and continuity. Replacing them is difficult; few candidates in the market combine such a broad mix of technical, leadership, and coaching skills.

Scrum, Kanban, and What Doesn’t Change

Practices evolve. Scrum remains widely used, but many teams operate in Kanban or hybrid systems. The shift to continuous delivery doesn’t eliminate the need for Agile leadership; if anything, it heightens it.

As work becomes more distributed and complex, teams still need a steward of flow and feedback. Frameworks differ; however, the function that enables collaboration and systemic improvement remains the same.

The Path Forward: Protect the Capability, Not the Title

Instead of asking, “Should we bring Scrum Masters back?” leaders should be asking a more fundamental question:

Who in our organization is responsible for enabling collaboration, removing impediments, promoting improvement, maintaining team health, and driving systemic learning?

If the answer is “no one,” it doesn’t matter what you call the role; you have a gap.

If the answer is “partially someone (rotated or shared),” acknowledge the compromise, the diffusion of ownership, and a loss of focus, and revisit it as the organization matures.

Agile will continue to exist with or without a dedicated Scrum Master or Agile Leader. Frameworks evolve, but the principles, small batches, fast feedback, and empowered teams remain the same. Having a dedicated role strengthens a team’s ability to apply those principles consistently. Without one, Agile doesn’t vanish, but performance and improvement discipline often do.

The point isn’t about losing Agile practices; it’s about the risk of losing stewardship. Without it, the habits that once drove learning and improvement fade, and teams can inevitably slide back toward the rigid, hierarchical models Agile set out to change.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


Related Reading

If this topic resonated with you, you may find these articles valuable as complementary perspectives:

  • From Scrum Master to Agile Delivery Manager: Evolution in the Age of Flow
    Explores how the Agile leadership role evolved beyond facilitation to become a strategic driver of flow and measurable outcomes.
  • Why Cutting Agile Leadership Hurts Teams More Than It Saves
    Examines the long-term cultural and performance costs organizations face when eliminating roles dedicated to continuous improvement.
  • Mindsets That Shape Software Delivery Team Structures
    Highlights how leadership philosophies, command-and-control versus systems-enabling, determine whether teams thrive or stall.

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

From Two Pizzas to One: How AI Reshapes Dev Teams

October 2, 2025 by philc

Exploring how AI could reshape software teams, smaller pods, stronger guardrails, and the balance between autonomy and oversight.

7 min read

For more than two decades, Jeff Bezos’s “two-pizza team” rule has been shorthand for small, effective software teams: a group should be small enough that two pizzas can feed them, typically about 5–10 people. The principle is simple: fewer people means fewer communication lines, less overhead, and faster progress. The math illustrates this well: 10 people create 45 communication channels, while four people create just six. Smaller groups spend less time coordinating, which often leads to faster outcomes.
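Those channel counts follow from the pairwise-connections formula n(n-1)/2, which a small helper reproduces:

```python
def communication_channels(team_size):
    """Pairwise communication paths in a team of n people: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

print(communication_channels(10))  # 45 channels
print(communication_channels(4))   # 6 channels
```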

This article was sparked by a comment at this year’s Enterprise Technology Leadership Summit. A presenter suggested that AI could soon reshape how we think about team size. That got me wondering: what would “one-pizza teams” actually look like if applied to enterprise-grade systems where resilience, compliance, and scalability are non-negotiable?

The Hype: “Do We Even Need Developers?”

In recent months, I’ve heard product leaders speculate that AI might make developers optional. One senior product manager even suggested, half-seriously, that “we may not need developers at all, since AI can write code directly.” On the surface, that sounds bold. But in reality, it reflects limited hands-on experience with the current tools. Generating a demo or prototype with AI is one thing; releasing code into a production system, supporting high-volume, transactional workloads with rollback, observability, and compliance requirements, is another. It’s easy to imagine that AI can replace developers entirely until you’ve lived through the complexity of maintaining enterprise-grade systems.

I’ve also sat in conversations with CTOs and VPs excited about the economics. AI tools, after all, look cheap compared to fully burdened human salaries. On a spreadsheet, reducing teams of 8–12 engineers down to one or two may appear to unlock massive savings. But here again, prototypes aren’t production, and what looks good in theory may not play out in practice.

The Reality Check

The real question isn’t whether AI eliminates developers; it’s how it changes the balance between humans, tools, and team structure. While cost pressures may tempt leaders to shrink teams, the more compelling opportunity may be to accelerate growth and innovation. AI could enable organizations to field more small teams in parallel, modernize multiple subdomains simultaneously, deliver features faster, and pivot quickly to outpace their competitors.

Rather than a story of headcount reduction, one-pizza teams could become a story of capacity expansion, with more teams and a broader scope, all while maintaining the same or slightly fewer people. But this is still, to some extent, a crystal ball exercise. None of us can predict with certainty what teams will look like in three, five, or ten years. What seems possible today is that AI enables smaller pods to take on more responsibility, provided we approach this shift with caution and discipline.

Why AI Might Enable Smaller Teams

AI’s value in this context comes from how it alters the scope of work for each developer.

Hygiene at scale. Practices that teams often defer, such as tests, documentation, release notes, and refactors, can be automated or continuously maintained by AI. Quality could become less negotiable and more baked into the process.

Coordination by contract. AI works best when given context. PR templates, paved roads, and CI/CD guardrails provide part of that. But so do rule files, lightweight markdown contracts such as cursor_rules.md or claude.md that encode expectations for test coverage, security practices, naming conventions, and architecture. These files give AI the boundaries it needs to generate code that aligns with team standards. Over time, this could transform AI from a generic assistant into a domain-aware teammate.

Broader scope. With boilerplate and retrieval handled by AI, a small pod might own more of the vertical stack, from design to deployment, without fragmenting responsibilities across multiple groups.

Reduced overhead. Acting as a shared memory and on-demand research partner, AI can minimize the need for lengthy meetings or additional specialists. Coordination doesn’t disappear, but some of the lower-value overhead could shrink.

From Efficiency to Autonomy

The promise isn’t simply in productivity gains per person; it may lie in autonomy. AI could provide small pods with enough context and tooling to operate independently. This autonomy might enable organizations to spin up more one-pizza teams, each capable of covering a subdomain, reducing technical debt, delivering features, or running experiments. Instead of doing the same work with fewer people, companies might do more work in parallel with the same resources.

How Roles Could Evolve

If smaller teams become the norm, roles may shift rather than disappear.

  • Product Managers could prototype with AI before engineers write code, run quick user tests, and even handle minor fixes.
  • Designers might use AI to generate layouts while focusing more on UX research, customer insights, and accessibility.
  • Engineers may be pushed up the value chain, from writing boilerplate to acting as architects, integrators, and AI orchestrators. This creates a potential career pipeline challenge: if AI handles repetitive tasks, how will junior engineers gain the depth needed to become tomorrow’s architects?
  • QA specialists can transition from manual testing to test strategy, utilizing AI to accelerate execution while directing human effort toward edge cases.
  • New AI-native roles, such as prompt engineers, context engineers, AI QA, or solutions architects, may emerge to make AI trustworthy and enterprise-aligned.

In some cases, the traditional boundaries between product, design, and engineering could blur further into “ProdDev” pods, teams where everyone contributes to both the vision and the execution.

The Enterprise Reality

Startups and greenfield projects may thrive with tiny pods or even solo founders leveraging AI. But in enterprise environments, complexity doesn’t vanish. Legacy systems, compliance, uptime, and production support continue to require human oversight.

One-pizza pods might be possible in select domains, but scaling them down won’t be simple. Where it does happen, success may depend on making two human hats explicit:

  • Tech Lead – guiding design reviews, threat modeling, performance budgets, and validating AI output.
  • Domain Architect – enforcing domain boundaries, compliance, and alignment with golden paths.

Even then, these roles rely on shared scaffolding:

  • Production Engineering / SRE – managing incidents, SLOs, rollbacks, and noise reduction.
  • Platform Teams – providing paved roads like IaC modules, service templates, observability baselines, and policy-as-code.

The point isn’t that enterprises can instantly shrink to one-pizza teams, but that AI might create the conditions to experiment in specific contexts. Human judgment, architecture, and institutional scaffolding remain essential.

Guardrails and Automation in Practice

For smaller pods to succeed, standards need to be non-negotiable. AI may help enforce them, but humans must guide the judgment.

Dual-gate reviews. AI can run mechanical checks, while humans approve architecture and domain impacts.

Evidence over opinion. PRs should include artifacts, tests, docs, and performance metrics, so reviews are about validating evidence, not debating opinions.

Security by default. Automated scans block unsafe merges.

Rollback first. Automation should default to rollback, with humans approving fixing forward.

Toil quotas. Reducing repetitive ops work quarter by quarter keeps small teams sustainable.

Beyond CI, AI can also shape continuous delivery by optimizing pipelines, enforcing deployment policies, validating changes against staging telemetry, and even self-healing during failures.
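As a sketch of how a dual-gate review might be wired together, the snippet below shows automation verifying evidence before a human reviews architecture and domain impact. The check names and data shape are assumptions for illustration, not the interface of any specific CI/CD product:

```python
# Hypothetical evidence attached to a pull request by automated checks.
pr_evidence = {
    "tests_passed": True,
    "coverage_delta": 0.4,          # percentage points of coverage added
    "security_findings": 0,         # blocking findings from automated scans
    "rollback_plan_present": True,  # rollback-first: a plan must exist up front
    "docs_updated": True,
}

def mechanical_gate(evidence):
    """Gate 1: automation validates evidence; any failure blocks the merge."""
    return (
        evidence["tests_passed"]
        and evidence["coverage_delta"] >= 0
        and evidence["security_findings"] == 0
        and evidence["rollback_plan_present"]
        and evidence["docs_updated"]
    )

if mechanical_gate(pr_evidence):
    print("Evidence verified; route to a human for architecture and domain review.")
else:
    print("Blocked: fix the failing checks before requesting human review.")
```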

What’s Real vs. Wishful Thinking (2025)

AI is helping, but unevenly. Gains emerge when organizations re-architect workflows end-to-end, rather than layering AI on top of existing processes.

Quality and security remain human-critical. Studies suggest a high percentage of AI-generated code carries vulnerabilities. AI may accelerate output, but without human checks, it risks accelerating flaws.

AI can make reviews more efficient by summarizing diffs and flagging issues, but final approval still requires human judgment on architecture and risk.

And production expectations haven’t changed. A 99.99% uptime commitment still allows only about 13 minutes of downtime per quarter. Even if AI can help remediate, humans remain accountable for those calls.

Practitioner feedback is also worth noting. In conversations with developers and business users of AI, most of whom are still in their first year of adoption, the consensus is that productivity gains are often inflated. Some tasks are faster with AI, while others require more time to manage context. Most people view AI as a paired teammate, rather than a fully autonomous agent that can build almost everything in one or two shots.

Challenges to Consider

Workforce disruption. If AI handles more routine work, some organizations may feel pressure to reduce the scope of specific roles. Whether that turns into cuts or an opportunity to reskill may depend on leadership choices.

Mentorship and pipeline. Junior engineers once learned by doing the work AI now accelerates. Without intentional design of new learning paths, we may risk a gap in the next generation of senior engineers.

Over-reliance. AI is powerful but not infallible. It can hallucinate, generate insecure code, or miss subtle regressions. Shrinking teams too far might leave too few human eyes on critical paths.

A Practical Checklist

  • Product risk: 99.95%+ SLOs or regulated data? Don’t shrink yet.
  • Pager noise: <10 actionable alerts/week and rollback proven? Consider shrinking.
  • Bus factor: ≥3 engineers can ship/release independently? Consider shrinking.
  • AI maturity: Are AI checks and PR evidence mandatory? Consider shrinking.
  • Toil trend: Is toil tracked and trending down? Consider shrinking.
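Read as a decision rule, the checklist could be encoded roughly as follows; the thresholds mirror the list above, and the field names are mine:

```python
def ready_to_shrink(team):
    """Rough screen for one-pizza experiments, mirroring the checklist above."""
    if team["slo_target"] >= 0.9995 or team["regulated_data"]:
        return False  # product risk is too high to shrink yet
    return (
        team["actionable_alerts_per_week"] < 10
        and team["rollback_proven"]
        and team["engineers_who_can_release"] >= 3   # bus factor
        and team["ai_checks_and_pr_evidence_required"]
        and team["toil_trending_down"]
    )

example_team = {
    "slo_target": 0.999, "regulated_data": False,
    "actionable_alerts_per_week": 6, "rollback_proven": True,
    "engineers_who_can_release": 3,
    "ai_checks_and_pr_evidence_required": True, "toil_trending_down": True,
}
print(ready_to_shrink(example_team))  # True -> a candidate, not a mandate
```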

Bottom Line

AI may make one-pizza teams possible, but only if automation carries the repetitive workload, humans retain judgment and oversight, and guardrails enforce standards. Done thoughtfully, smaller pods don’t mean scarcity; they can mean focus.

And when organizations multiply these pods across a portfolio, the outcome might not just be sustaining velocity but accelerating it: more features, faster modernization, shorter feedback loops, and quicker pivots against disruption.

This is the story of AI in team structure, not doing the same with less, but doing more with the same.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

Beyond Delivery: Realizing AI’s Potential Across the Value Stream

September 29, 2025 by philc

Moving beyond AI-assisted delivery to achieve measurable, system-wide impact through value stream visibility and flow metrics.

10 min read

At the 2025 Engineering Leadership Tech Summit, Mik Kersten previewed ideas from his upcoming book, Output to Outcome: An Operating Model for the Age of AI. He reminded us of a truth often overlooked in digital transformation: Agile delivery teams are not the constraint in most cases.

Kersten broke the software value stream into four phases (Ideate, Create, Release, and Operate) and showed how the majority of waste and delay happens outside of coding. One slide in particular resonated with me. Agile teams accounted for just 8% of overall cycle time. The real delays sat at the bookends: 48% in ideation, slowed by funding models, approvals, and reprioritizations; and 44% in release, bogged down by dependencies, technical debt, and manual processes.

This framing raises a critical question: if we only apply AI to coding or delivery automation, are we just accelerating the smallest part of the system while leaving the actual bottlenecks untouched?

AI in the Delivery Stage: Where the Industry Stands

In a recent DX Engineering Enablement podcast, Laura Tacho and her co-hosts discussed the role of AI in enhancing developer productivity. Much of their discussion centered on the Create and Release stages: code review, testing, deployment, and CI/CD automation. Laura made a compelling point about moving beyond “single-player mode”:

“AI is an accelerant best when it’s used at an organizational level, not when we just put a license in the hands of an individual… Platform teams can own a lot of the metaphorical AI headcount and apply it in a horizontal way across the organization.”

Centralizing AI adoption and applying it across delivery produces leverage, rather than leaving individuals to experiment in isolation. But even this framing is still too narrow.

The Missing Piece: AI Adoption Across the Entire Stream

The real opportunity is to treat AI not as a tool for delivery efficiency, but as a partner across the entire value stream. That means embedding AI into every stage and measuring it with system-level visibility, not just delivery dashboards.

This is why I value platforms that integrate tool data across the whole stream, with system-level metrics and visibility dashboards, rather than tools that stop at delivery.

Of course, full-stream visibility platforms are more expensive, and in many organizations, only R&D teams are driving efforts to improve flow. As I’ve argued in past writing on SEI vs. VSM, context matters: sometimes the right starting point is SEI, when delivery is the bottleneck. But when delays span ideation, funding, or release, only a VSM platform can expose and address systemic waste.

AI opportunities across the stream:

  • Ideation (48%) – Accelerate customer research, business case drafting, and approvals; surface queues and wait states in one view.
  • Create (8%) – Apply AI to coding, reviews, and testing, but tie it to system outcomes, not vanity speedups.
  • Release (44%) – Automate compliance, dependency checks, and integration work to reduce handoff delays.
  • Operate – Target AI at KTLO and incident patterns, feeding learnings back into product strategy.

When AI is applied across the whole system (value stream), we can ask a better question: not “How fast can we deploy?” but “How much can we compress idea-to-value?” Moving from 180 days to 90 days or less becomes possible when AI supports marketing, product, design, engineering, release, and support, and when the entire system is measured, not just delivery.

VSM vs. Delivery-Only Tooling

This is where tooling distinctions matter. DX Core 4 and SEI platforms, such as LinearB, focus on delivery (Create and Release), which is valuable but limited to one stage of the system. Planview Viz and other VSM platforms, by contrast, elevate visibility across the entire value stream.

Delivery-only dashboards may show how fast you’re coding or deploying. But Value Stream Management reveals the actual business constraints, often upstream in funding, prioritization, PoCs, and customer research, or downstream in handoffs and release.

Without that lens, AI risks becoming just another tool that speeds up developers without improving the system.

AI as a Force Multiplier in Metrics Platforms

AI embedded directly into metrics platforms can change the game. In a recent Product Thinking podcast, John Cutler observed:

“We talked to a company that’s spending maybe $4 million in staff hours per quarter around just people spending time copying and prepping for all these types of things… All they’re doing is creating a dashboard, pulling together a lot of information, and re-contextualizing it so it looks the same in a meeting. I think that’s just a massive opportunity for AI to be able to help with that kind of stuff.”

This hidden cost of operational overhead is real. Leaders and teams waste countless hours aggregating and reformatting data into slides or dashboards to make it consumable.

Embedding AI into VSM or SEI platforms removes that friction. Instead of duplicating effort, AI can generate dashboards, surface insights, and even facilitate the conversations those dashboards are meant to support.

This is more of a cultural shift than a productivity gain. Less slide-building, more strategy. Less reformatting, more alignment. And metrics conversations that finally scale beyond the few who have time to stitch the story together manually.
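As a rough illustration of what embedding AI into a metrics platform could look like, here is a minimal sketch that feeds already-aggregated flow metrics to an LLM and asks for a meeting-ready summary. It assumes the official Anthropic Python SDK and an ANTHROPIC_API_KEY environment variable; the model name, metric values, and prompt are placeholders for illustration, not how any particular VSM or SEI vendor implements this.

```python
import os
import anthropic

# Assumes the official Anthropic Python SDK; the model name below is a placeholder.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Pre-aggregated flow metrics pulled from a VSM or SEI platform (illustrative values).
metrics = {
    "flow_time_days": {"last_quarter": 41, "this_quarter": 33},
    "flow_efficiency_pct": {"last_quarter": 12, "this_quarter": 17},
    "change_failure_rate_pct": {"last_quarter": 9, "this_quarter": 8},
}

prompt = (
    "Summarize these quarterly flow metrics for an executive review. "
    "Call out improvements, regressions, and one follow-up question worth discussing:\n"
    f"{metrics}"
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

The design point is not the specific model or vendor; it is that the narrative layer, which today consumes staff hours in copying and reformatting, can be generated from the same data the platform already holds.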

The ROI Lens: From Adoption to Efficiency

The ROI of AI adoption is no longer a question of whether to invest; that decision is now a given. As Atlassian’s 2025 AI Collaboration Report shows, daily AI usage has doubled in the past year, and executives overwhelmingly cite efficiency as the top benefit.

The differentiator now is how efficiently you manage AI’s cost, just as the cloud debate shifted from whether to adopt to how well you could optimize spend.

But efficiency cannot be measured by isolated productivity gains. Atlassian found that while many organizations report time savings, only 4% have seen transformational improvements in efficiency, innovation, or work quality.

The companies breaking through embed AI across the system: building connected knowledge bases, enabling AI-powered coordination, and making AI part of every team.

That’s why the ROI lens must be grounded in flow metrics. If AI adoption is working, we should see:

  • Flow time shrinks
  • Flow efficiency rises
  • Waste reduction becomes visible in the stream
  • Flow velocity accelerates (more items delivered at the same or lower cost)
  • Flow distribution rebalances (AI resolving technical debt and reducing escaped defects)
  • Flow load stabilizes (AI absorbing repetitive work and signaling overload early)

System-wide VSM platforms make these signals visible, showing whether AI is accelerating the idea-to-value process across the entire stream, not just helping individuals move faster.
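Here is a minimal sketch of how a few of those signals could be computed from raw work-item data; the field names and numbers are hypothetical and do not reflect any specific platform’s schema.

```python
# Hypothetical completed work items for one period, with active vs. waiting time.
items = [
    {"type": "feature", "active_days": 6, "wait_days": 22},
    {"type": "defect",  "active_days": 2, "wait_days": 5},
    {"type": "debt",    "active_days": 4, "wait_days": 9},
    {"type": "feature", "active_days": 8, "wait_days": 30},
]

flow_times = [i["active_days"] + i["wait_days"] for i in items]
flow_time = sum(flow_times) / len(flow_times)                              # average days per item
flow_efficiency = sum(i["active_days"] for i in items) / sum(flow_times)  # active time / total time
flow_velocity = len(items)                                                 # items completed this period

# Flow distribution: share of completed work by type (features vs. defects vs. debt).
distribution = {}
for i in items:
    distribution[i["type"]] = distribution.get(i["type"], 0) + 1

print(f"Flow time: {flow_time:.1f} days")
print(f"Flow efficiency: {flow_efficiency:.0%}")
print(f"Flow velocity: {flow_velocity} items")
print(f"Flow distribution: {distribution}")
```

Tracked before and after an AI rollout, these same calculations show whether the gains are systemic or confined to individual speedups.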

Bringing It Full Circle

In recent conversations with a large organization’s CTO, and again with Laura while exploring how DX and Anthropic measure AI, I kept returning to the same point: we already have the metrics to know if AI is making an impact. AI is now just another option or tool in our toolbox, and its effect is reflected in flow metrics, change failure rates, and developer experience feedback.

We are also beginning to adopt DX AI Framework metrics, which are structured around Utilization, Impact, and Cost, aligning with the metrics that companies like Dropbox and Atlassian currently measure. But even as we incorporate these, we continue to lean on system-level flow metrics as the foundation. They are what reveal whether AI adoption is truly improving delivery across the value stream, from ideation to production.

Leadership Lessons from McKinsey and DORA

This perspective also echoes Ruba Borno, VP at AWS, in a recent McKinsey interview on leading through AI disruption. She noted that while AI’s pace of innovation is unprecedented, only 20–30% of proofs of concept reach production. The difference comes from data readiness, security guardrails, leadership-driven change management, and partnerships.

And the proof is tangible: Canva, working with AWS Bedrock, moved from the idea of Canva Code to a launched product in just 12 weeks. That’s precisely the kind of idea-to-operation acceleration we need to measure. It shows that when AI is applied systematically, you don’t just make delivery faster; you also make the entire flow from concept to customer measurably shorter.

The 2025 DORA State of AI-Assisted Software Development Report reinforces this reality. Their cluster analysis revealed that only the top performers, approximately 40% of teams, currently experience AI-enhanced throughput without compromising stability. For the rest, AI often amplifies existing dysfunctions, increasing change failure rates or generating additional waste.

Leadership Implications: What the DORA Findings Mean for You

The 2025 DORA report indicates that only the most mature teams currently benefit from AI-assisted coding. For everyone else, AI mostly amplifies existing problems. What does that mean if you’re leading R&D?

1. Don’t skip adoption, but don’t roll it out unthinkingly.

AI is here to stay, but it’s not a silver bullet. Start small with teams that already have strong engineering practices, and use them to build responsible adoption patterns before scaling.

2. Treat AI as an amplifier of your system.

If your flow is healthy, AI accelerates it. If your flow is dysfunctional, AI makes it worse. Think of it like a turbocharger: great when the engine and brakes are tuned, dangerous when they’re not.

3. Use metrics to know if AI is helping or hurting.

  • Flow time, efficiency, and distribution should improve.
  • DORA’s stability metrics (such as change failure rate) should remain steady or decline.
  • Developer sentiment should show growing confidence, not frustration.
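To make these checks concrete, here is a minimal pre/post sketch, assuming a baseline captured before the AI rollout; the metric names, values, and thresholds are illustrative, not prescriptive.

```python
# Hypothetical baseline (pre-AI rollout) and current values.
baseline = {"flow_time_days": 41, "change_failure_rate": 0.09, "dev_sentiment": 3.4}
current  = {"flow_time_days": 33, "change_failure_rate": 0.08, "dev_sentiment": 3.7}

checks = {
    # Flow should improve: flow time must not grow.
    "flow_improving": current["flow_time_days"] <= baseline["flow_time_days"],
    # Stability should hold: change failure rate must not rise materially.
    "stability_holding": current["change_failure_rate"] <= baseline["change_failure_rate"] * 1.05,
    # Sentiment (e.g., a 1-5 survey score) should not be falling.
    "sentiment_holding": current["dev_sentiment"] >= baseline["dev_sentiment"],
}

for name, passed in checks.items():
    print(f"{name}: {'OK' if passed else 'investigate'}")
```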

4. Fix bottlenecks in parallel.

AI won’t remove waste; it will expose it faster. Eliminate approval delays, reduce tech debt, and streamline release processes so AI acceleration actually creates value.

5. Carry the right message.

The lesson isn’t “don’t adopt AI.” It’s: adopt responsibly, measure outcomes, and strengthen your system so that AI becomes an accelerant, not a liability.

Ruba’s message, reinforced by both McKinsey and DORA, leads to the same conclusion: AI adoption succeeds when it’s measured at the system level, tied to business outcomes, and championed by leadership. Without that visibility, organizations risk accelerating pilots that never translate into value.

Conclusion: Beyond Delivery

The conversation about AI in software delivery is maturing. It’s no longer just about adoption, but about managing costs and system impact. AI must be measured not only by its utilization but also by how it improves flow efficiency, compresses the idea-to-value cycle, and reduces systemic waste.

The organizations that will win in this new era are those that:

  • Embed AI across the entire value stream, not just in delivery.
  • Measure ROI through flow metrics that connect improvements to business outcomes.
  • Manage AI’s cost as carefully as they once managed cloud costs.
  • Lead with visibility, change management, and partnerships to scale adoption.

And critically, successful AI integration requires more than deploying tools. It requires thoughtful measurement, training, and implementation best practices that sustain quality, applied consistently across all roles, from product and design to operations and support. Only then can organizations ensure that the promise of acceleration improves outcomes without undermining the collaboration and sustainability that long-term software success depends on.

In short: AI in delivery is helpful, but AI across the value stream is transformational.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  • Atlassian. (2025). How leading companies unlock AI ROI: The AI Collaboration Index. Atlassian Teamwork Lab. Retrieved from https://atlassianblog.wpengine.com/wp-content/uploads/2025/09/atlassian-ai-collaboration-report-2025.pdf
  • Borno, R., & Yee, L. (2025, September). How to lead through the AI disruption. McKinsey & Company, At the Edge Podcast (transcript). Retrieved from https://www.mckinsey.com
  • Cutler, J. (2025, September 23). Product Thinking: Freeing Teams from Operational Overload [Podcast]. Episode 247. Apple Podcasts. https://podcasts.apple.com/us/podcast/product-thinking/id1550800132?i=1000728179156
  • DX, Engineering Enablement Podcast. (2025). Episode excerpt on AI’s role in developer productivity and platform teams. DX. (Quoted in article from Laura Tacho). Episode 90, https://podcasts.apple.com/us/podcast/the-evolving-role-of-devprod-teams-in-the-ai-era/id1619140476?i=1000728563938
  • DX (Developer Experience). (2025). Measuring AI code assistants and agents: The DX AI Measurement Framework™. DX Research, co-authored by Abi Noda and Laura Tacho. Retrieved from https://getdx.com (Image: DX AI Measurement Framework).
  • Kersten, M. (2025). Output to Outcome: An Operating Model for the Age of AI (forthcoming). Presentation at the 2025 Engineering Leadership Tech Summit.
  • Google Cloud & DORA (DevOps Research and Assessment). (2025). 2025 State of AI-Assisted Software Development Report. Retrieved from https://cloud.google.com/devops/state-of-devops

Further Reading

For readers interested in exploring AI ideas further, here are a few related pieces from my earlier writing:

  • AI in Software Delivery: Targeting the System, Not Just the Code
  • AI Is Improving Software Engineering. But It’s Only One Piece of the System
  • Leading Through the AI Hype in R&D
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Software Engineering, Value Stream Management
