Rethink Your Understanding


Transforming Software Delivery



Agile Isn’t Dead and AI Isn’t Killing It Either

January 24, 2026 by philc

AI Is Rebundling Roles, Shrinking Some Teams, and Raising the Bar for Responsible Delivery

9 min read

My first article for 2026. I’ve been back in my software roots: weeks of looping with Geoffrey Huntley’s Ralph Wiggum, visiting Steve Yegge’s Gastown, and swapping my earlier AI requirements and repo/tasking workflows for tighter, spec-first execution, GSD (Getting Sh*t Done repo) style, with planning modes that actually keep pace.

As much fun as I’ve been having writing code, this article is about leadership and software delivery, not a new AI tool. It was sparked by a headline I’ve seen so many times I can almost predict it: “moving away from Agile,” “Agile is obsolete,” “Agile is dead.”

This time it was a YouTube title from a major consulting firm: “Moving away from Agile: What’s Next” (McKinsey). I wasn’t surprised; consulting narratives have a way of “ending” whatever you’re doing to make room for the next wave of services. I’m not here to debate the video. I’m here to challenge the pattern behind that headline, because it keeps coming back, and now it’s being repackaged as an AI-era conclusion.

I keep seeing “Agile is dead” headlines, now repackaged for the AI era. My take: AI isn’t killing Agile. AI is illuminating constraints that were already in the value stream.

If coding gets faster and lead time doesn’t improve, the bottleneck was never engineering output. It was prioritization, dependencies, validation, operability, and decision latency.

That’s the problem with the “Agile is dead” narrative: it confuses a delivery wrapper with a business capability.

Agility is not a sprint calendar, a Jira workflow, or a job title. Agility is a capability: the organizational skill to sense change, make decisions, and deliver value quickly enough to learn and adapt before the market moves again. Put prototypes in customers’ hands sooner. Shorten the time between “we think” and “we know.” Reduce the cost of being wrong. That capability is a competitive requirement in modern software businesses, not a trend we can retire.

In this post, I outline the load-bearing responsibilities that never go away, why roles will rebundle as teams shrink, and why Value Stream Management (VSM) and flow metrics matter more as AI increases delivery capacity. I’m genuinely curious about where others see the constraint shift as AI adoption grows.

In November, I wrote When AI Isn’t Enough to make a simple point: AI accelerates output, but it doesn’t replace fundamentals, judgment, or accountability. This article is a follow-up to that argument, focusing on the delivery operating model. https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/

What is changing, and changing fast, is how teams cover the work. AI compresses execution time, reshapes roles, and makes low-value ceremony impossible to defend. But it does not delete the responsibilities required to deliver software safely in the real world.

So no, Agile isn’t dead. Maybe what’s dying is Agile theater.

The debate is mislabeled

When people say “Agile is dead,” they’re often reacting to dysfunction that deserves to die:

  • Standups that are status meetings
  • Backlogs that are graveyards
  • Sprint plans that exist to create the appearance of control
  • Story points treated like productivity
  • “Agile transformations” where leadership behavior never changed

If that’s your lived experience, the conclusion feels tempting: the system is heavy and slow, and AI just made the contrast painful.

But that doesn’t mean agility is obsolete. It means your organization was using a process to simulate control.

Agility was never about the ceremonies. Agility is the ability to learn fast under uncertainty. Scrum and Kanban are just different ways to manage that learning loop. AI doesn’t remove the need to steer. It raises the stakes on steering because the engine just got bigger.

Team size can shrink. Responsibility surface area does not

AI is making smaller, stream-aligned teams more feasible in some contexts. You can feel it in the language: “builders” is becoming a popular label precisely because it implies broader ownership, people who can take an idea and move it forward end-to-end with help from tools and agents.

But here’s the part leaders keep getting wrong:
Shrinking the team does not shrink the work that must be covered to deliver and operate software responsibly.

Roles can be consolidated. Responsibilities do not disappear.

AI is collapsing traditional role boundaries within cross-functional teams.

Product managers can now use AI to do meaningful slices of work that used to require separate specialists: synthesize customer feedback at scale, interrogate trends in quantitative data, draft PRDs and acceptance criteria, and produce “good enough” prototypes that accelerate discovery and alignment.

On the delivery side, engineers are increasingly being pulled both upstream and downstream, tightening requirements, exposing edge cases, and improving the spec-to-task chain, while also generating test ideas, acceptance scenarios, and risk-based coverage faster.

This compresses cycle time and rebundles work into fewer hands, but the obligations don’t change: decisions still need evidence and judgment, and shipped changes still must be secure, validated, and operable. AI accelerates artifact creation, it doesn’t shift accountability when those artifacts are wrong.

If you reduce staffing without deliberately reallocating responsibilities, you don’t get a faster team. You get a fragile one that ships wrong faster.

This is the core misunderstanding in the “AI killed Agile” narrative. AI can take on more of the production of work: drafting, synthesizing, generating, and executing. It cannot take on accountability. And it absolutely cannot eliminate the need for clear ownership of the full delivery lifecycle.

The delivery “load-bearing system” that never goes away

No matter what toolchain you use (AI agents, copilots, code generators), a mature product team still has to cover the same end-to-end responsibility surface area across the value stream. AI can accelerate pieces of it, but it doesn’t delete the categories.

It is like the load-bearing structure of a building. You can renovate the interior all you want, swap tools, shrink teams, rebundle roles, automate entire phases, but you don’t get to remove the beams and call it innovation. If you take out the load-bearing parts, the building might look fine for a moment, right up until you add speed, scale, and real customer demand. Then it fails in expensive, public ways.

AI changes the finishing work and the pace of construction. It doesn’t change what’s structurally required for software delivery to hold up under pressure.

You still need outcome clarity: what problem you’re solving, for whom, and what success means.

You still need discovery and validation: evidence, constraints, and signals that the work is worth doing.

You still need work design: thin slicing, sequencing, WIP discipline, and dependency management.

You still need engineering coherence: architecture, contracts, data correctness, security, and privacy-by-design, tradeoffs that hold under change.

You still need verification and resilience: automated tests, performance and reliability checks, security validation, and confidence in recovery.

You still need delivery and operations: CI/CD, safe rollouts, observability, incident readiness, and cost hygiene.

And you still need learning loops: feedback into priorities, retrospectives with teeth, continuous improvement grounded in bottlenecks rather than opinion.

Call it Scrum, call it Kanban, call it flow. This surface area of responsibility is the reality of software delivery. The framework doesn’t change the reality. It changes how you manage it.

What AI actually changes: distribution, not elimination

AI is changing delivery systems in a few predictable ways.

First, it reduces the cost of producing executable clarity. PRDs, briefs, acceptance criteria, architecture options, test cases, runbooks, and documentation can be drafted quickly. That doesn’t remove the need for these artifacts. It changes who can draft them and how fast teams can iterate.

Second, it makes verification loops cheaper and more continuous. This is where the conversation should be. AI does not make quality automatic. It makes quality automation easier if you design for it. The winning pattern isn’t “AI wrote it, ship it.” The winning pattern is “AI drafted it, then the system verified it repeatedly until it earned release confidence.”
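That “drafted, then verified” pattern can be sketched as a simple release gate: a change earns confidence only after every check passes, and any failure sends it back to drafting. Everything below (the `Change` type, the gate names) is a hypothetical illustration, not a real pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    """An AI-drafted change awaiting release confidence (illustrative)."""
    description: str
    checks_passed: list = field(default_factory=list)

def earn_release_confidence(change: Change, gates) -> bool:
    """Run each (name, predicate) verification gate in order.

    The change ships only if every gate passes; on any failure it goes
    back to drafting, and the team, not the model, owns the fix.
    """
    for name, predicate in gates:
        if not predicate(change):
            return False
        change.checks_passed.append(name)
    return True

# Hypothetical gates: unit tests, security scan, performance budget.
gates = [
    ("unit tests", lambda c: True),
    ("security scan", lambda c: True),
    ("performance budget", lambda c: len(c.description) < 200),
]
earn_release_confidence(Change("AI-drafted refactor of the billing adapter"), gates)
```

The point of the sketch is the shape of the loop, not the specific gates: verification runs repeatedly and cheaply, and a human-owned system decides when confidence has been earned.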

Accountability doesn’t move to the model. It stays with the team.

Third, it moves the bottleneck upstream. When execution gets cheap, delays show up where they have always lived, but were easier to ignore when coding was slow:

  • Unclear priorities
  • Slow decision-making
  • Messy dependency networks
  • Environment and access friction
  • Data quality and migration risk
  • Compliance and governance
  • Weak observability
  • Unclear ownership

AI makes building faster. It makes those problems louder.

So if your end-to-end lead time doesn’t improve after “AI productivity gains,” don’t assume you outgrew Agile. You just discovered that engineering output was never your constraint. Your value stream was.
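One way to make that diagnosis concrete is to time-stamp stage transitions and compute where the days actually go. A minimal flow-metrics sketch, with stage names and dates invented for illustration:

```python
from datetime import datetime

# One work item's stage-entry timestamps, oldest first.
# Stage names are illustrative, not a prescribed workflow.
item = {
    "requested":   datetime(2026, 1, 5),
    "prioritized": datetime(2026, 1, 19),  # 14 days waiting upstream
    "in_dev":      datetime(2026, 1, 20),
    "in_review":   datetime(2026, 1, 21),  # coding took 1 day
    "deployed":    datetime(2026, 1, 28),  # 7 days in validation and release
}

def stage_durations(timestamps: dict) -> dict:
    """Days spent between consecutive stages of the value stream."""
    stages = list(timestamps.items())
    return {
        f"{a}->{b}": (t2 - t1).days
        for (a, t1), (b, t2) in zip(stages, stages[1:])
    }

def lead_time(timestamps: dict) -> int:
    """End-to-end lead time in days, first stage to last."""
    values = list(timestamps.values())
    return (values[-1] - values[0]).days

durations = stage_durations(item)
# Even if AI cut "in_dev" to zero, lead time drops by only 1 of 23 days:
# the constraint here is upstream prioritization and downstream validation.
```

In this made-up example, engineering accounts for one day of a 23-day lead time, which is exactly the situation the test above is designed to expose.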

One-pizza teams still need full coverage, so roles rebundle

This is where AI is forcing the real change, and it’s also where the “Agile is dead” headline is most misleading.

In larger teams, you can afford specialists. In smaller teams, the same responsibilities exist, but fewer people are responsible for them. That forces rebundling, and it demands clearer ownership.

Even the smallest stream-aligned team needs a few capability anchors, whether those anchors are full-time roles or shared hats:

  • An outcome anchor who protects clarity and success measures
  • A technical anchor who owns coherence, integration risk, and tradeoffs
  • A quality anchor who owns the verification strategy and release confidence
  • A flow anchor or delivery manager who owns WIP discipline, bottleneck visibility, and learning loops
  • An operability anchor who owns SLOs, observability, and incident readiness

In a two-pizza team, these might be separate people. In a one-pizza team, one person may cover multiple anchors, with AI agents often taking on more of the drafting, research, and execution within each area. But the anchors still need named ownership. Otherwise, the responsibilities become “everyone’s job,” which quickly turns into “no one’s job.”

This is also where the “builder” identity can go right or wrong. “Builder” can mean end-to-end ownership and tighter loops. Or it can become a euphemism for “we removed roles and hoped the work disappeared.”

With AI, the work doesn’t disappear. It redistributes.

Scrum and Kanban are not obsolete (they are context tools)

A lot of “Agile is dead” takes quietly translate to: “timeboxes feel slow, therefore Scrum is obsolete.” That’s not how mature teams think about frameworks. Mature teams choose mechanisms based on context.

Scrum is useful when a team needs a forcing function for planning cadence, stakeholder inspection points, and a regular rhythm of alignment, especially while decision rights and trust are still maturing.

Flow-based systems become more attractive when deployments are continuous, work items are consistently small, WIP limits are respected, and dependencies are visible and actively managed.

AI nudges many teams toward flow because small batch size and fast verification become even more powerful. But “nudges” is not “replaces.” What AI really kills is ceremony without outcomes.
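The “WIP limits are respected” condition is mechanical enough to sketch: a pull-based stage refuses work past its limit, so overload shows up as a visible upstream queue instead of hidden in-progress work. The stage name and limit below are arbitrary, for illustration only:

```python
from collections import deque

class WipLimitedStage:
    """A pull-based workflow stage that enforces a WIP limit (illustrative)."""

    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items = deque()

    def can_pull(self) -> bool:
        return len(self.items) < self.wip_limit

    def pull(self, item: str) -> bool:
        """Pull work only while under the limit; otherwise refuse it,
        so the bottleneck becomes visible rather than absorbed."""
        if not self.can_pull():
            return False
        self.items.append(item)
        return True

dev = WipLimitedStage("in_dev", wip_limit=3)
for ticket in ["T-1", "T-2", "T-3", "T-4"]:
    dev.pull(ticket)
# "T-4" is refused: the overload surfaces as a queue, not as invisible WIP.
```

The design choice worth noticing is that the stage returns `False` rather than silently accepting the fourth item; flow systems work because limits are enforced, not merely displayed.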

VSM matters more as AI adoption rises

Here’s the test I keep coming back to: If AI speeds up coding and your end-to-end lead time stays the same, your constraint is not engineering output. Your constraint is the value stream.

That’s why Value Stream Management and product operating models matter more, not less, in an AI-shaped world. You need visibility into where work actually waits. You need clarity on decision rights. You need an operating system that can absorb higher delivery capacity without increasing rework and production risk.

AI is an accelerator. The product operating model is the steering and the guardrails. If steering and guardrails are weak, AI doesn’t create agility; it creates faster confusion.

That’s why, in an AI adoption wave, VSM and the product operating model become non-negotiable: together they convert raw delivery capacity into aligned outcomes through visibility, ownership, decision rights, and investment boundaries.

AI can starve your teams upstream if discovery and prioritization can’t keep up.

AI can jam your teams downstream if validation and operational readiness can’t keep up.

When the loops get out of sync, speed doesn’t feel like acceleration. It feels like chaos.

What to keep, what to drop, what to add

If you want a more useful conversation than “after Agile,” try “after Agile theater.” Keep what preserves learning and reduces risk. Drop what exists to create the appearance of control.

Add what makes teams AI agent-ready without surrendering judgment: engineered clarity, continuous verification loops, guardrails by design, and flow metrics that expose constraints across the full value stream, not just inside engineering.

The claim I’ll keep making in 2026

Agile isn’t dead unless someone can point to a genuinely new delivery model that eliminates the core responsibilities of building software fast, safely, and under uncertainty.

AI changes the speed of delivery, redistributes responsibilities, and changes who does the work. It does not remove the obligation to make sound decisions, validate rigorously, and operate reliably. Accountability has not changed.

So yes, teams will shrink in some contexts. Roles will rebundle. Titles will evolve. “Builders” will become a common identity. AI agents will take on more implementation, research, and drafting. But the delivery foundation remains.

And if you’re tempted to post “Agile is dead,” I’ll offer a challenge instead: tell me what replaces the benefit an organization gains from becoming more agile. Or tell me which responsibility set disappeared. Or tell me you’re really talking about theater.

Either way, we’ll have a more honest conversation.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Moving away from Agile: What’s Next – Martin Harrysson & Natasha Maniar, McKinsey & Company (YouTube).

Filed Under: Agile, AI, Leadership, Product Delivery, Software Engineering, Value Stream Management

AI Fluent, Fundamentally Lost

December 7, 2025 by philc

The Dual Bar for Hiring in 2026

4 min read

Last week, Gene Kim and Steve Yegge published a piece on vibe coding titled Hiring in the Age of AI: What to Interview For.1 Their central question is one every engineering leader must confront: if AI has reshaped how software is built, how should we evaluate talent today?

They argue that modern interviews must identify candidates who have embraced AI, engineers who can prompt, manage context, and direct tools toward outcomes. I agree. But this view overlaps with a concerning pattern I described in my recent article, When AI Isn’t Enough.2

We are at a crossroads where two truths coexist: AI fluency is no longer optional, but it is not enough to make someone an engineer.

The “AI Crutch” Phenomenon

In recent software engineering interviews, I’ve noticed a recurring pattern. Candidates breeze through screens using AI assistants, producing clean, working code. But the moment the conversation shifts to fundamentals, they collapse.

In one instance, a candidate couldn’t explain why they chose composition over inheritance in the code they had just generated. The code was solid, but the engineer lacked a mental model of why it worked or what would break if the requirements changed.

This was a lack of foundation. AI had become a crutch, allowing them to produce strong output while masking a hollow understanding of the system.

The Great Divergence: Acceleration vs. Noise

A pattern is emerging across the industry. Software engineering is splitting into two groups, and the results are counterintuitive.

Group 1: The Architects. Senior engineers (and those with strong instincts) are achieving massive productivity gains. They can guide AI, spot hallucinations, and explain clean architecture to the tool. For them, AI is an accelerator.

Group 2: The Prompters. Engineers without fundamentals are actually getting slower. They cannot evaluate the AI’s suggestions. When the model drifts, they lack the intuition to course-correct, turning the tool into noise rather than augmentation.

This second group creates a hidden enterprise risk: The Glass Cannon.

They build systems that look impressive and powerful but shatter under the pressure of real-world constraints. The risks are invisible at first, but devastating over time:

  • The Black Box Problem: Because they cannot explain their own output, they treat their code as a third-party library. When it breaks, recovery time skyrockets.
  • Debt at Machine Speed: They may ship features, but they generate technical debt at an accelerated rate. They cannot optimize for cloud costs, architecture, performance, or resilience, or spot silent security vulnerabilities, because they assume “working” means “correct.”
  • Team Burden: They shift significant pressure onto teammates and senior engineers who must catch flawed designs, brittle patterns, and AI-driven errors during code reviews.

This shifts the cost of software development from creation (which becomes cheap) to maintenance (which becomes prohibitively expensive).

The Dual Bar for Modern Talent

Effective hiring in 2026 requires us to stop picking one lens over the other. We must test for The Dual Bar:

  1. Can the candidate reason through a problem without the aid of AI? (To ensure they aren’t building glass cannons.)
  2. Can they intentionally use AI to accelerate their work? (To ensure they remain competitive.)

We aren’t hiring for what AI might be able to do in 2030. We are hiring for what teams need to ship and maintain now. That requires a new hiring rubric.

A New Hiring Model

To surface the engineers who can think, not just the ones who can prompt, structure your interview process around these five signals:

  • Fundamentals: Test this with at least one session where AI tools are off the table. Focus on fundamentals, design reasoning, and trade-offs, not syntax recall.
  • AI Fluency: Ask them to walk through a recent AI-assisted project. How did they prompt? How did they debug model mistakes? Or have them work through a challenge in real time using AI on a shared screen.
  • Communication: In an AI world, muddled explanations lead to muddled prompts. Can they articulate technical context with precision?
  • Systems Thinking: Present a scenario with competing trade-offs (e.g., latency vs. consistency). See if they can connect decisions to the broader architecture.
  • Curiosity: Ask what they’ve experimented with in the last 90 days. Engineers thriving in this era are climbing the learning curve with intention.

Acceleration vs. Illusion

There is a fine line between acceleration and illusion. If we hire based on the wrong signals, we risk building teams with strong output but weak understanding.

The current generation of great engineers will be those who use AI as a collaborator, not a substitute for thinking. They will use these tools to amplify their strengths rather than hide their gaps.

The question every leader should ask now: Does our interview process surface the engineers who can think, or just the ones who can prompt?

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Kim, G., & Yegge, S. (2025, December 1). Hiring in the Age of AI: What to Interview For. IT Revolution. https://itrevolution.com/articles/hiring-in-the-age-of-ai-what-to-interview-for/
  2. Phil Clark. (2025, November 29). When AI Isn’t Enough. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/

Filed Under: AI, Engineering, Leadership, Software Engineering

When AI Isn’t Enough

November 29, 2025 by philc

Why Fundamentals Still Matter in an AI-Shaped Engineering World

6 min read

In the past year, I’ve noticed a shift in how engineering candidates present themselves. A senior director on my team recently began interviewing for a critical backfill.

On paper, the candidates were strong. In the early rounds, several performed exceptionally well, with clean solutions, fast iterations, and confident code. But once the conversation moved from what they could produce with AI to what they understood without it, everything changed.

The same candidates who looked senior-level on a coding screen suddenly struggled with composition, inheritance, architectural tradeoffs, or the purpose behind common design patterns. They weren’t nervous. They didn’t know.

And that’s when a deeper leadership question emerged, one that every software engineering leader I’ve spoken with over the past year is now wrestling with:

What does it mean to be a software engineer when AI can write much of the software?

The Illusion of Mastery

We’ve been pushing AI adoption in our organization since early 2023. Not because it was trendy, but because it was obvious where the future was heading. Over the summer, we doubled down on AI literacy, aiming to have every engineer use these tools comfortably and confidently by year’s end.

The early days were rocky. Engineers said the tools slowed them down. The suggestions lacked context. Resetting instructions became a ritual. Reviews took longer, not shorter, because the generated code wasn’t always correct; it only looked correct. That friction turned out to be a necessary phase.

Once engineers learned how to provide context, prompt effectively, and evaluate output, their productivity didn’t just improve; it multiplied. AI amplifies skill; it does not create it. And that dynamic is now playing out across many hiring pipelines.

Do Fundamentals Still Matter?

A school of thought is gaining momentum in the industry. I’ve heard it from candidates, managers, and even a few senior leaders:

“If you can ask AI the right questions, do you really need to understand the underlying concepts?”

It’s a tempting idea. AI can explain patterns. It can suggest architecture. It can generate code that appears correct and often is.

In specific roles, rapid prototyping, experimentation, and early-stage product exploration may be enough. But anyone who has owned an enterprise system knows the distinction: A proof of concept is not a production system.

In the world of prototypes, speed wins; in the world of enterprise platforms, correctness, reliability, durability, and performance win. The gap between the two is everything.

The New Hiring Reality: AI Is Distorting the Signal

AI has blurred the lines between junior and senior skill, at least at first glance.

Depending on your interview workflow, AI-assisted candidates often perform exceptionally well in early rounds. The solutions come fast. The code reads cleanly. The abstractions look polished. If you’re not paying attention, it’s easy to mistake output for understanding.

But when the conversation shifts to architecture, reasoning, debugging, or explaining why something works, the floor sometimes drops out.

This is not a candidate problem so much as an ecosystem problem. Our traditional hiring processes were not designed for a world where AI can mask gaps in foundational knowledge.

One candidate our director interviewed solved coding problems flawlessly with AI assistance, but could not explain the difference between inheritance and composition. He had mastered the tool, not the craft.
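For contrast, the distinction that candidate missed fits in a few lines. Inheritance bakes a variant into the class hierarchy (“is-a”); composition delegates to a collaborator that can be injected and swapped at runtime (“has-a”), which is why design guides often prefer it where variation is expected. A minimal illustrative sketch (the class names are invented):

```python
# Inheritance: the variant is fixed at class-definition time ("is-a").
class Report:
    def render(self) -> str:
        return "plain report"

class HtmlReport(Report):  # every HtmlReport is forever HTML
    def render(self) -> str:
        return "<p>plain report</p>"

# Composition: behavior is delegated to a collaborator ("has-a"),
# so it can be chosen, swapped, or faked in tests without subclassing.
class PlainFormatter:
    def format(self, text: str) -> str:
        return text

class HtmlFormatter:
    def format(self, text: str) -> str:
        return f"<p>{text}</p>"

class ComposedReport:
    def __init__(self, formatter):
        self.formatter = formatter  # injected dependency

    def render(self) -> str:
        return self.formatter.format("plain report")

# Swapping output formats is a constructor argument, not a new subclass:
ComposedReport(HtmlFormatter()).render()
ComposedReport(PlainFormatter()).render()
```

An engineer who generated either version with AI should still be able to explain this trade-off, because it determines how the code bends when requirements change.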

And that raises another concern, one that many CTOs and senior technology leaders now whisper privately: AI is enabling people to appear more capable than they actually are.

AI-Enabled Deception

We’re beginning to see cases where individuals use AI not just to enhance competence, but to manufacture the appearance of it.

Some candidates have used AI to pass interviews, screening rounds, and background checks, only to contribute little or no meaningful work once hired. I know of firsthand examples where someone worked just long enough to collect paychecks before disappearing.

The reality is that, in a screen-shared interview, candidates can quietly lean on second-monitor tools or even AI “whispers.” Everything looks legitimate, yet the candidate may be receiving real-time assistance you cannot detect. Our previous trust assumptions in technical interviews no longer reflect the capabilities of modern tools.

This Is Where Fundamentals Matter Again

Fundamentals matter, not out of nostalgia, but because high-performing systems demand them. Enterprise systems break in ways that require:

  • context
  • judgment
  • intuition
  • analytical reasoning
  • pattern literacy
  • understanding of failure domains
  • the ability to debug what AI got wrong

AI will increasingly diagnose issues before humans get involved. But evaluating whether the fix is correct still requires someone who understands the system beneath the abstraction.

Without fundamentals, engineers become dependent on AI. With fundamentals, engineers become exponentially more effective. That distinction is not negotiable.

Accountability Hasn’t Changed

A subtle misconception is emerging: if AI generated the code, responsibility shifts. It does not. Teams remain fully accountable for every line they push to production, AI-assisted or not. And at least for now and the near future, nothing about AI’s current capabilities changes that.

AI does not dilute ownership. AI does not absorb blame. AI does not change the duty of care.

If an engineer cannot explain the code they are committing, they are not ready to commit it. And if a team cannot reason about how a change behaves under load, in failure, or across distributed components, the team is not ready to own that system.

This isn’t theoretical. AI-generated code is already introducing subtle regressions, brittle logic, and incorrect assumptions. When teams ship code they don’t fully understand, failures become harder to diagnose and recover from.

Ambiguity around ownership is the fastest way to erode reliability.

Fundamentals preserve accountability. They allow engineers to validate, challenge, and harden AI-generated output with the same rigor expected of human-written code. Most importantly, they prevent teams from outsourcing judgment, the one responsibility no tool can assume.

In the current AI era, fundamentals serve as guardrails that keep systems reliable and teams accountable.

Rethinking What We Evaluate

If we expect engineers to use AI, and we should, then interviews must evolve to focus on what AI cannot conceal. These include architectural reasoning, debugging skills, the ability to assess and challenge AI-generated output, design intuition, system-level thinking, and the ability to explain decisions before writing code.

Engineers still need a strong command of foundational concepts that AI frequently mishandles. They must understand how data structures and algorithms affect performance and scalability, and how memory and state behave in real production environments. They should know core software design principles such as encapsulation, composition, immutability, and functional patterns, which guide how systems are structured and maintained.

They also benefit from fluency in common design patterns and the judgment to apply them responsibly. They need a clear grasp of APIs, contracts, and system boundaries, as well as how architectural choices play out in distributed, event-driven, and microservice-based environments. They must be able to reason about concurrency, consistency models, failure scenarios, and performance bottlenecks, areas where AI-generated code frequently introduces subtle bugs.

Finally, they require strong testing, debugging, and diagnostic skills. Engineers must be able to interpret logs, metrics, traces, and behavioral patterns to understand what software is actually doing rather than relying solely on what an AI claims it should do.

For now, these skills are what set high-performing, AI-capable engineers apart.

The Bottom Line

AI is transforming software development at a pace we haven’t seen since the shift from on-prem systems to the cloud. But speed introduces its own risks. Leaders must now answer a question that will define the next decade of engineering:

Do we want teams that generate code with AI, or teams that understand, validate, and elevate what AI produces?

Because in proofs of concept, AI might be enough. In enterprise systems, where durability, reliability, and trust matter, misunderstanding comes at a cost. AI is an extraordinary amplifier. Fundamentals remain the stabilizer.

Engineering organizations that insist on both will build the most resilient and competitive systems in the years ahead.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: AI, Engineering, Leadership, Software Engineering

When the System Fits, the Product Operating Model Works

November 27, 2025 by philc

9 min read

In every conversation about product delivery, team structures, and operating models, one pattern always stands out: there is no single correct structure for a modern software organization.

Leaders make decisions based on their architecture, constraints, history, and the goals they want to achieve. That is why we see so much variation across companies. Some organizations thrive with smaller, long-lived, self-managed cross-functional teams aligned to clear domains. Others depend on larger engineering manager-led groups, shared capability teams, or more centralized arrangements. These differences are not failures. They are the result of leaders shaping systems around their specific context.

My own experience has shown the strength of a particular combination: small, autonomous, cross-functional, long-lived product teams operating within a clear boundary, supported by Team Topologies thinking, Agile practices, DevOps and continuous delivery, Value Stream Management, and the Product Operating Model.

When these elements align with the architecture and constraints of the environment, they create clarity, flow, and accountability. When they do not, the same practices that thrive in one environment can struggle in another. The operating model only performs when the system beneath it supports it.

That is why I appreciated Thorsten Speil’s recent LinkedIn article on the Product Operating Model. He captured many of its strengths and also surfaced the areas where interpretation varies, including team size, organizational implications, discovery practices, and the broader operational impact of shifting to a product-oriented way of working. His post brought these nuances back into focus and highlighted how easily good ideas get misunderstood once they spread across different companies and contexts.

Two themes resurfaced during the discussion. They do not reflect issues with Thorsten’s article, but they are common points of confusion across the industry and worth exploring more deeply.

Misunderstanding 1: Marty Cagan is recommending larger teams

This belief usually comes from surface-level summaries rather than the substance of the work. In his book Transformed, Marty Cagan does not argue that big teams are inherently better. He is arguing against dividing teams into narrow technical slices that leave them unable to deliver value without coordinating across several other groups.

When a team owns only a small fragment of the flow, such as just the UI or database layer, its success depends on the progress of others. Ownership becomes diluted, and dependencies increase.

The real question is not whether a team is “small” or “large.” It is whether the team owns a complete slice of value: a domain or subdomain, or a coherent value stream, that it can deliver with minimal coordination.

In the organizations I’ve worked with, when we refactored monolithic or tangled systems and clarified domain boundaries, teams often became smaller, not larger, but crucially, they became whole and autonomous. What changed was their completeness, not just headcount.

What really determines the right team design is context: the architecture, domain boundaries, cognitive load, subject-matter expertise requirements, and the way work and value flow across the system.

If a subdomain or product in a portfolio is large enough and demands sustained work, a dedicated team may make sense. If several small subdomains or products share architecture or customer value, a single team or squad covering them together can reduce overhead. Team size and structure should align with system boundaries and value streams, not arbitrary org chart conventions.

Misunderstanding 2: The Product Operating Model replaces DevOps

These two ideas are sometimes mentioned together, but they address different layers of the organization.

DevOps improves the path from code to production. It strengthens feedback loops, automation, stability, and the ability to release safely and frequently. The Product Operating Model influences how decisions are made, how work is funded, how discovery and delivery are structured, and how teams are aligned to outcomes. It governs how strategy flows into teams.

One is about delivery performance. The other is about organizational direction. They are not interchangeable, and in a healthy system, they support each other. DevOps allows teams to learn quickly and respond rapidly. The Product Operating Model ensures that this capability is being applied to the right opportunities.

When organizations confuse the two, they end up with teams that can ship quickly but have no clarity on why, or teams that are empowered in theory but constrained by an outdated delivery path.

Where Value Stream Management fits

One of the most overlooked parts of the conversation is the role of Value Stream Management. Many organizations adopt the Product Operating Model with the right intentions, but without visibility into how work actually flows today. Value Stream Management provides that visibility. It shows where work gets stuck, where dependencies cluster, where priorities conflict, and where delays originate. It is the mechanism that connects architecture, team boundaries, and the customer journey into a single picture.

Without this visibility, a product-aligned structure becomes guesswork. Leaders cannot see the real bottlenecks, and teams cannot understand why autonomy feels out of reach. Flow metrics reinforce this visibility by making delays, load, efficiency, and distribution measurable. When VSM, flow metrics, and POM reinforce each other, teams gain stability and clarity. Ownership becomes real rather than symbolic.
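
To make that visibility concrete, here is a minimal sketch, using entirely hypothetical work items and field names rather than any specific VSM tool's data model, of how Flow Time, lead time, and flow efficiency might be computed from basic timestamps:

```python
from datetime import datetime

# Hypothetical work items; all IDs, dates, and fields are illustrative.
items = [
    {"id": "A-1", "created": "2025-01-06", "started": "2025-01-08",
     "done": "2025-01-17", "active_days": 4},
    {"id": "A-2", "created": "2025-01-06", "started": "2025-01-13",
     "done": "2025-01-24", "active_days": 3},
]

def days_between(a, b):
    """Whole days between two ISO dates."""
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

for item in items:
    flow_time = days_between(item["started"], item["done"])  # work start to finish
    lead_time = days_between(item["created"], item["done"])  # request to finish
    # Flow efficiency: share of flow time spent actively working vs. waiting.
    efficiency = item["active_days"] / flow_time
    print(item["id"], flow_time, lead_time, round(efficiency, 2))
```

Even this toy version shows the pattern that matters: when flow efficiency is low, most of the elapsed time is queueing and handoffs, not engineering work, which is exactly what the "Agile is dead" narrative tends to misattribute to delivery teams.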

The Product Operating Model also changes how work is funded

Another important idea that often gets overlooked is the shift in funding. The Product Operating Model is not simply a structural or cultural change; it changes how work is supported economically.

Instead of funding projects on an annual cycle, organizations fund products and the teams responsible for them. Teams are long-lived rather than assembled and disbanded. Prioritization is continuous rather than fixed once a year.

Outcomes replace scope as the primary measure of progress, and domain expertise becomes a long-term asset. Stable teams and stable funding reinforce each other and create an environment where real ownership and long-term accountability can thrive.

Architecture enables team autonomy

It is common to talk about rapid delivery, continuous discovery, and empowered teams, but none of these is possible unless the architecture supports them.

If components are tightly coupled, if deployments require several approvals, or if core systems or data are shared among many teams, autonomy becomes difficult to implement regardless of intention. Organizational charts cannot compensate for technical constraints.

The most effective team topologies emerge from systems with clear domain boundaries, separation of concerns, modularity, and platform capabilities that reduce cognitive load. When architecture and team design reinforce each other, teams can own outcomes. When they conflict, coordination overhead grows, and autonomy becomes harder to achieve.

Architecture choices shape, but do not dictate, the model

I often advocate for distributed systems and microservices because they reduce dependency load and allow teams to operate with greater independence. But that does not mean these architectures are right for every organization. Modular monoliths, macroservices, domain-oriented monoliths, and hybrid models can all support effective product teams when their boundaries are clear and consistent.

What matters most is that the architecture supports meaningful ownership. I have seen monolithic systems with strong modular structure outperform poorly partitioned microservices because the boundaries were more deliberate.

The Product Operating Model does not require microservices. It requires coherent ownership aligned with the architectural reality.

A monolithic system can still operate effectively under a Product Operating Model when teams have clear ownership boundaries. The fundamental idea behind the Product Operating Model is organizing around outcomes and customer value rather than technical layers.

Teams need responsibility for a meaningful, end-to-end part of the product, not just a narrow slice of the stack. When a monolith is structured with deliberate domain separation and disciplined layers, teams can still take ownership of specific product areas or value streams and make decisions within those boundaries.

At the same time, monolithic systems often introduce more coordination requirements. Shared code paths, tightly coupled components, and synchronized releases can create friction and increase dependency load. These challenges do not prevent the Product Operating Model from working, but they require more intentional communication, clearer boundaries, and stronger agreements around how teams collaborate inside the monolith.

The architecture does not have to be perfect; it simply needs to support coherent ownership. The clearer the system’s internal structure, the easier it is for teams to operate end to end without excessive coordination.

This is why context matters. The Product Operating Model succeeds when the system enables teams to own outcomes, regardless of whether the underlying architecture is a monolith, a modular monolith, or a distributed set of services.

Why context matters

Organizations often begin by asking whether they should adopt the Product Operating Model. A better question is what their current system allows and where the real constraints are.

You can adopt a Product Operating Model in a monolithic architecture, and many companies do. What matters most is whether teams can own meaningful areas of the product, make decisions with limited friction, and deliver improvements without excessive dependencies. Some monoliths support this quite well, particularly when structured with clear domain boundaries. Others are so tightly coupled that autonomy is difficult until parts of the system are modernized.

The model itself is rarely the constraint. The system and its boundaries are. Most failed transformations happen not because the Product Operating Model is flawed, but because leaders apply it without understanding the environment that must support it.

The real work is creating the conditions for POM to succeed

Organizations that succeed with the Product Operating Model share several characteristics. Their architecture supports autonomy. Their value streams are visible. Flow metrics guide decisions. Team structures match real domain boundaries. DevOps practices are mature enough to support rapid learning and delivery. And product, design, and engineering operate together as one system.

In these environments, the Product Operating Model does not feel like a framework. It is the natural way the organization should operate. It aligns people, technology, and strategy into a coherent system and gives teams the conditions they need to take real ownership.

What Really Determines Whether POM Succeeds

Most debate about the Product Operating Model focuses on whether it is the right model. That is not the most helpful place to begin. The more important question is whether the system can support long-term product ownership and sustained team autonomy.

The Product Operating Model is not only a team structure. It is a commitment to funding products rather than projects, supporting teams for the lifespan of the product, building and retaining domain expertise, prioritizing work continuously instead of annually, and evaluating progress through outcomes rather than activity. When these elements are combined with modern architecture, visibility into flow, and strong DevOps practices, the Product Operating Model becomes a practical and natural way to operate. Teams can own their work end-to-end and connect what they build to real customer value.

When organizations attempt to adopt the model without making these underlying adjustments, POM struggles. Team boundaries feel artificial, ownership breaks down, and delivery becomes a ceremony rather than a learning experience.

The more productive question is not whether to adopt the Product Operating Model, but how: what needs to change in the architecture, the flow of work, the funding model, and the team design so that a product-oriented way of working can thrive in this environment.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References and Further Reading

This article draws on ideas and practices that have shaped modern product development, organizational design, and software delivery. For readers who want to explore the concepts more deeply, the following works provide useful context.

Thorsten Speil – “You need to move to the Product Operating Model! … Really?” (2025), https://www.linkedin.com/pulse/you-need-move-product-operating-model-really-whats-thorsten-speil-2mhcf/
The original post that inspired this article and sparked a thoughtful discussion on how organizations interpret and apply POM principles in different contexts.

Marty Cagan – Transformed (2024)
Clear articulation of the Product Operating Model and the organizational conditions needed to support empowered product teams.

Matthew Skelton and Manuel Pais – Team Topologies
Guidance on service-aligned team structures, interaction modes, cognitive load, and organizational boundaries that support flow.

Value Stream Management Consortium – Project to Product Reports (2023–2024)
Industry research on flow metrics, product funding, and how organizations connect technology investments to actual business outcomes.

Dr. Nicole Forsgren, Jez Humble, and Gene Kim – Accelerate
Evidence-based insights into DevOps, continuous delivery, feedback loops, and the capabilities of high-performing engineering organizations.

Steve Pereira and Andrew Davis – Flow Engineering
Practical mapping techniques for visualizing system constraints, dependencies, and opportunities to improve value flow.

Eric Evans – Domain-Driven Design
Architectural foundations for creating clear domain boundaries that support coherent ownership in product-aligned teams.

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

Why Value Stream Management and the Product Operating Model Matter (and What Comes Next)

November 5, 2025 by philc

6 min read

I had the opportunity to revisit my January article and refine its key points for a recent Flowtopia.io post.

Seeing the Why Behind the Frameworks

In 2021, as part of our evolving Agile transformation, I introduced Value Stream Management (VSM) and later championed the Product Operating Model (POM). Yet I never clearly articulated why these practices mattered.

Looking back, we had already been moving toward a product-oriented model long before naming it. Cross-functional product teams operated organically but without shared governance. When capacity pressures mounted, priorities blurred and inefficiencies surfaced, showing that alignment and communication of purpose are as essential as the frameworks themselves.

Inside my own organization, alignment lagged. Technology advanced rapidly, and engineers and Agile Leaders embraced flow metrics and value-stream thinking, while the product function remained loosely engaged. Without clear accountability, the message fractured: technology optimized for flow; product managed for capacity. The gap limited our ability to realize the frameworks’ potential.

This imbalance is common. Most organizations face more work than they have capacity for, making prioritization and a focus on outcomes essential. VSM and the Product Operating Model address this directly, aligning teams, optimizing workflows, and ensuring that every hour of capacity contributes to real value.

“Adopting frameworks isn’t enough; leaders must over-communicate their purpose.”

The Turning Point: When Efficiency Isn’t Enough

Every transformation reaches a moment of truth. You automate more, deploy faster, and report higher output, yet business leaders still ask, “How are our investments being utilized?”

The disconnect isn’t about effort or talent, but about visibility. Most digital organizations struggle to clearly understand how knowledge work flows or how investments in Scrum, Kanban, DevOps, automation, and now AI impact performance. Teams, in turn, can’t see how their daily work ties to customer or business outcomes.

That’s where VSM and POM intersect, two complementary frameworks that connect flow, alignment, and outcomes. Both emerged from the same realization: efficiency alone is insufficient. Without linking how value flows to what outcomes it creates, organizations risk optimizing for motion instead of progress. Sustaining expertise and funding across a product’s lifespan, rather than through short-term projects, produces better results.

From Projects to Products

For decades, technology operated as a cost center measured by utilization and velocity. Projects were funded, staffed, delivered, and dissolved. The product model reversed that logic.

By aligning long-lived teams around customer and business outcomes, organizations create real ownership and continuity. Teams become responsible not just for delivery, quality, and security, but for the outcomes they produce.

Economic accountability strengthens this model. In a product-funded operating structure, long-lived teams contribute to sales and growth, but they also influence the margins those products generate. That requires understanding more than top-line revenue. Teams should know their cost of goods sold (COGS): the direct costs, licenses, labor, implementation effort, and other team expenses that determine the actual cost of delivering and supporting the product.

When teams are evaluated on margin contribution rather than throughput or feature count, the dynamic changes. Ownership deepens. The definition of value expands. Financial discipline becomes part of everyday decision-making.
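
As a hedged illustration of what that discipline looks like in practice, with invented figures and product names rather than real financial data, margin contribution per product team might be calculated like this:

```python
# Hypothetical per-product economics; every figure and name is illustrative.
products = {
    "checkout": {"revenue": 1_200_000,
                 "cogs": {"cloud": 140_000, "licenses": 60_000,
                          "team_labor": 700_000}},
    "reporting": {"revenue": 450_000,
                  "cogs": {"cloud": 90_000, "licenses": 30_000,
                           "team_labor": 380_000}},
}

def margin_contribution(p):
    """Absolute margin and margin percentage after direct costs (COGS)."""
    cogs = sum(p["cogs"].values())
    margin = p["revenue"] - cogs
    return margin, margin / p["revenue"]

for name, p in products.items():
    margin, pct = margin_contribution(p)
    print(f"{name}: {margin:,} ({pct:.0%})")
```

Note that in this made-up data one product is margin-negative: a team evaluated on throughput alone would never see that, while a team that knows its COGS can act on it.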

This also creates new complexity. Accountability and funding are no longer as simple as “get the code out.” They become “deliver a product customers will buy, at a margin the business can sustain.” For many organizations, this is far harder than shipping features, especially when teams are short-lived, responsibilities overlap, or cost allocations remain unclear.

But this discipline is one of the most powerful levers for turning the Product Operating Model from a framework built for speed into one built for sustainable value. It does not push teams back into a cost-center posture. Instead, it gives them the visibility to understand how Flow, outcomes, and customer success connect directly to profitability.

In our case, context switching dropped. Developers embedded in single domains became accountable for both flow and customer outcomes. Priorities shifted faster, decisions stayed within teams, and purpose became clearer. When people see how their work creates value, metrics stop being abstract and become insights for improvement; they start to matter.

Context Is Everything

“There is no one-size-fits-all approach to transformation. The true power of frameworks like VSM and POM lies in their flexibility to serve as blueprints rather than rigid rules.”

Adoption succeeds only when frameworks align with an organization’s structure, culture, and leadership context. Models fail not by design but by misapplication. That’s why effective organizations start by seeing their system before changing it.

Value Stream Mapping provides visibility, showing how work moves, where it slows, and how efficiently it reaches customers. Flow Engineering practices, such as Outcome Maps, Current-State Maps, and Dependency Maps, enable leaders to visualize how work, teams, and dependencies interact. These visualizations reveal friction, conflicting priorities, and hidden handoffs that delay the realization of value.

“Visibility creates alignment. Alignment establishes the foundation for improvement.”

The 2024 Project to Product State of the Industry Report confirms that elite organizations don’t just implement frameworks; they adapt them to fit their structure and customer context. That adaptability turns adoption into transformation.

Flow and Realization: The Two Sides of Value

Every delivery system operates in two dimensions:

Flow – how efficiently value moves.

Realization – how effectively that value produces business or customer outcomes.

Most organizations measure one and overlook the other or treat them as separate conversations.

Flow metrics, including Flow Time, Velocity, Efficiency, Distribution, and DORA metrics, reveal system health but not its impact.

Realization metrics, such as retention, revenue contribution, and time-to-market, show outcomes but not efficiency.

“Flow transforms effort into movement; realization transforms movement into impact.”

The 2024 Project to Product Report found that fewer than 15% of organizations integrate flow metrics with business outcomes. Yet those that do so outperform their peers on both speed and customer satisfaction.
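
A minimal sketch of what that integration might look like, using hypothetical products, thresholds, and metric names, joins the two views so each product is read through both lenses at once:

```python
# Hypothetical per-product metrics; names and thresholds are illustrative.
flow = {        # system health: how efficiently value moves
    "checkout":  {"flow_time_days": 6,  "flow_efficiency": 0.40},
    "reporting": {"flow_time_days": 21, "flow_efficiency": 0.15},
    "search":    {"flow_time_days": 4,  "flow_efficiency": 0.35},
}
realization = { # impact: what the delivered value produced
    "checkout":  {"retention_delta": 0.03,  "revenue_delta": 180_000},
    "reporting": {"retention_delta": 0.01,  "revenue_delta": 40_000},
    "search":    {"retention_delta": -0.02, "revenue_delta": -10_000},
}

def classify(product):
    """Read flow and realization together instead of as separate reports."""
    row = {**flow[product], **realization[product]}
    fast = row["flow_time_days"] <= 10       # illustrative threshold
    improving = row["revenue_delta"] > 0
    if fast and not improving:
        return "moving fast, but toward what?"
    if not fast and improving:
        return "valuable outcomes, slowed by the system"
    return "flow and outcomes agree"

for product in flow:
    print(product, classify(product))
```

The point of the join is the mismatched cases: fast flow with flat outcomes is motion without progress, and strong outcomes with slow flow points at a system constraint rather than a team problem.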

Measuring Across Layers

Metrics operate across three layers:

• System Layer: Flow & DORA metrics reveal delivery efficiency.

• Team Layer: Developer Experience (DX) and sentiment show team health.

• Business Layer: Realization metrics link work to outcomes.

Connecting these layers turns measurement into meaning and prevents metric theater: reporting what’s easy instead of what matters.

Leadership and Structure: The Missing Link

Even the best frameworks fail without a shift in leadership. Adopting VSM and POM means transitioning from a command-and-control approach to one of clarity, from managing tasks to managing systems.

Delegation and empowerment become strategic levers. Leaders define and communicate outcomes and boundaries; teams own delivery, quality, and learning within them. Guided by data-driven feedback, they experiment and improve.

The best teams treat flow and realization as continuous feedback loops, a living system that evolves with every release.

Governance through transparency replaces micromanagement. Dashboards enable leaders to coach, rather than control, by focusing on flow, bottlenecks, and opportunities. Empowerment becomes shared ownership of outcomes.

A mature value-stream culture recognizes that leadership doesn’t disappear, but evolves. The leader’s job is to design the system where great work happens, not to be the system itself.

What Comes Next: Amplification Through AI

Organizations often ask, “What’s next?”

The answer is amplification: using technology, data, and AI to accelerate insight and learning.

AI doesn’t change your system; it magnifies it. If your processes are slow, AI exposes that faster. If your system is healthy, it enhances visibility, identifies bottlenecks, and predicts where investment yields the highest return.

The future of AI in VSM is about augmenting human judgment, not replacing it. Intelligent automation links flow metrics to outcomes, detects deviations early, and surfaces recommendations that leaders can act on in real time. This evolution expands the leader’s role once again, from observer to orchestrator of improvement.

Bridging Technology and Business Value

My ongoing focus is strengthening the connection between technology execution and business outcomes, a lesson shaped by feedback from an executive 360-degree assessment: “You should focus more on business results as a technology leader.”

That insight was right. We transformed from a monolithic architecture and waterfall process into a world-class Agile, microservices-based organization, yet we hadn’t consistently shown how that transformation delivered measurable business results.

To close that gap, we’re developing tools that make value visible:

• Value Stream Templates to connect work with business objectives.

• Initiative & Epic Definitions emphasizing outcomes and dependencies.

• Team-Level OKRs tied to measurable business priorities.

• Knowledge Hub Updates highlighting outcomes over outputs.

The 2024 Project to Product Report found that organizations that consistently link delivery, metrics, and business outcomes outperform their peers in terms of agility, profitability, and retention.

“The answers reveal whether your organization is optimizing activity or enabling value.”

The Real Transformation

When combined, VSM and POM unlock a higher level of capability. They teach leaders to see how work flows, how people collaborate, and how outcomes drive real impact.

When you see work as a flow of value rather than a measure of effort, you stop managing activity and start leading outcomes.

That’s the actual transformation, shifting focus from what we deliver to what difference it makes.

“The time to act is now. Let’s lead purposefully, ensuring our teams deliver meaningful, measurable value in 2026 and beyond.”

Transformation is never solitary; shared understanding across our industry is where alignment begins.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. The 2024 Project to Product State of the Industry Report, Planview, https://info.planview.com/project-to-product-state-of-the-industry-_report_vsm_en_reg.html
  2. Why Value Stream Management and the Product Operating Model Matter, Rethink Your Understanding, https://rethinkyourunderstanding.com/2025/01/why-vsm-and-the-product-operating-model-matter/

Filed Under: Agile, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

The Price of Alignment

October 21, 2025 by philc

How Well-Intentioned Integration Can Undermine Modern Architecture and Team Autonomy

11 min read

Having been part of numerous acquisitions and technical due diligence efforts throughout my career, I have observed familiar patterns in mergers. Leadership evaluates architecture, the technology stack, team skills, and organizational structure to assess fit and value. I have seen how integration decisions can reinforce or erode what made an acquisition valuable.

In one recent integration, however, the process risked undoing the very capabilities that had made the acquired organization successful. Its architecture and team design, once aligned for speed and autonomy, were reshaped in ways that constrained both.

A large, established technology company, which I will call LegacyTech, acquired a smaller, fast-moving company, AgileWorks. LegacyTech’s architecture is still largely monolithic, shaped by years of prioritizing control, predictability, and reliability. Its teams are large, its operating rhythm consistent, and its management approach straightforward: one engineering manager for about eight engineers, one product manager per team, and a uniform structure applied across the organization. Part of this structure reflected LegacyTech’s leadership instincts; the other part reflected the very real need to standardize roles, expectations, and career frameworks across the combined company.

AgileWorks had spent nearly a decade transforming from a monolithic system into a microservices-based organization. Its teams were small, long-lived, cross-functional, and aligned to clearly defined domains and subdomains. Each team owned its services end to end, including logic, data, deployments, and the flow of value. They operated with local decision-making, shipped independently, and continuously improved without waiting for outside coordination.

At AgileWorks, engineering managers focused on people and career development rather than directing day-to-day delivery. Delivery teams included dedicated Agile leaders, product partners, technical leads, user experience, QA, and the skills required to deliver independently.

By the time of the acquisition, AgileWorks had become the kind of organization many aspire to be: fast, autonomous, and adaptable.

The Integration Challenge

After the acquisition, LegacyTech required all teams, both legacy and newly acquired, to adopt the same structure, reporting model, and span-of-control expectations. AgileWorks’ small, domain-aligned teams were reshaped to match LegacyTech’s organization. What seemed rational and efficient on paper quickly began influencing how work flowed in practice. Structure does not simply describe how people work. It defines how they work.

When Structure Shapes the System

Conway’s Law teaches that organizations design systems that mirror their communication structures.

At AgileWorks, small teams designed, built, and deployed small services. At LegacyTech, large teams built large systems. Both structures made sense in their respective architectures.

When the two companies merged, the system began to change, not because the code changed, but because the structure changed. Teams that once released independently now had to coordinate across domains. Engineering managers balanced conflicting priorities across multiple queues. Flow slowed. The architecture itself risked drifting toward a more coupled and synchronized model.

This was not personal or political. It was mechanics.

Understanding the Architectural Misalignment

LegacyTech was not wrong. It was simply optimized for a different world.

Its monolithic architecture enabled centralized decision-making, broad responsibilities, and larger teams. In that environment, consistency and uniform structures work well.

AgileWorks’ architecture required something different. It operated within a distributed microservice architecture with bounded contexts. Each service independently owns its logic, data, and deployment lifecycle. Because the architecture was modular, the teams were modular as well. Small teams were not a stylistic preference. They were a structural capability.

Seen through LegacyTech’s perspective, AgileWorks’ structure looked unfamiliar: more teams, fewer people per team, independent decisions, separate flows. Without curiosity about architectural context, autonomy can look like fragmentation.

Why the “Too Many SKUs” Question Misses the Point

As integration continued, a recurring question arose. “Why does AgileWorks have so many SKUs?”

To LegacyTech, the number seemed excessive. An SKU is simply an internal identifier for a product, but in AgileWorks’ system, each SKU represented a bounded context, a domain or subdomain with its own architecture, team, and flow of value.

This is the natural outcome of microservices. As Martin Fowler and Dave Farley describe them, microservices are small, autonomous, independently deployable, and aligned to a single, well-defined domain boundary. That was exactly how AgileWorks structured its system. Each SKU marked a clean separation of concerns, not a proliferation of products.

Microservices reduce dependency drag by allowing teams to work in parallel, which DORA research consistently shows is a predictor of higher performance. What LegacyTech viewed as unnecessary complexity was in fact a sign of architectural maturity.

The Hidden Cost of Consistency

Consistency creates clarity and predictability. It is appealing, especially in large organizations. But when consistency is applied without understanding architectural intent, it can silently erode the strengths the acquisition sought to preserve.

Reassigning domain-aligned teams into broader groups collapses boundaries that AgileWorks had intentionally kept separate. From the outside, the structure appears aligned. On the inside, queues form, decisions slow, and delivery suffers.

What appeared efficient became regression.

Leadership Context and Legacy Mindsets

LegacyTech’s leaders emphasized consistency across the combined organization. One executive summarized it plainly: “We cannot redesign all of our teams to match the company we acquired, so we will redesign theirs to match ours.” From their perspective, this was a practical decision.

With far more teams than AgileWorks, standardizing the smaller footprint seemed simpler and more efficient. AgileWorks’ leadership understood this dynamic well; it was the same approach they had used when integrating engineering teams during their own previous acquisitions.

LegacyTech’s desire for consistency created immediate pressure to reorganize AgileWorks’ teams. To meet these expectations while minimizing disruption, the AgileWorks department head adopted a phased hybrid approach.

Instead of dismantling domain boundaries outright, he grouped related subdomain teams under individual engineering managers until each manager had an average of eight software engineers in their hierarchy. This met LegacyTech’s span-of-control rules while protecting delivery continuity for committed roadmap work. Leadership and HR at LegacyTech formally approved the plan.

Although the plan satisfied the stated requirements, it did not align with how LegacyTech leaders mentally modeled team structure. They were accustomed to a one-engineering-manager-to-one-team pattern tied to a single codebase. Seeing multiple small subdomain teams grouped under a single engineering manager did not fit that worldview.

Despite backchannel conversations, no one approached the AgileWorks leader directly to understand why the hybrid structure existed or what it was designed to protect. Over time, this misunderstanding worked against him.

The original team design misunderstanding also set the stage for deeper structural changes. To eliminate ambiguity and fully align the organizations, AgileWorks was required to adopt LegacyTech’s management model. This meant removing the Agile Leader role from each team and shifting delivery responsibility directly to engineering managers.

The shift was more than procedural. It fundamentally redefined the role of the engineering manager at AgileWorks.

Engineering managers, who had previously focused on people development and coaching, were now accountable for day-to-day delivery, performance, coordination, and practice consistency. Their span of control increased from five to eight, requiring them to support two or sometimes three small subdomain teams, since most AgileWorks teams averaged three developers.

What had once been a role centered on enabling people quickly became one centered on directing delivery. The cognitive load increased, role boundaries blurred, and the structure that once allowed AgileWorks to move rapidly and independently became increasingly challenging to maintain.

What had once enabled flow and autonomy at the subdomain-scoped team level now introduced friction and confusion. These role changes did not remain at the organizational layer; they risked influencing how the architecture behaved.

This tension over roles and structure surfaced again in how leaders interpreted AgileWorks’ team sizes and scopes.

Team Design, Domain Thinking, and the Case for Larger Scopes

Misunderstanding also surfaced around team size and scope. AgileWorks had several small teams working within the same domain, which some LegacyTech leaders interpreted as fragmentation. The issue was not fragmentation. It was a missed opportunity to ask why the structure existed in the first place.

In Transformed, Marty Cagan describes a common pitfall in product transformations. Organizations create too many narrow teams that each own a thin slice of the product. These slices are too small to deliver real outcomes independently. Handoffs increase, dependencies grow, and accountability becomes unclear.

Cagan’s recommendation is not necessarily to build bigger teams. It is to give small teams a larger scope: increase what a team owns, not the number of people on it.

AgileWorks followed this principle. Drawing on Domain-Driven Design, separation of concerns, microservices, and distributed systems patterns, it organized teams so that:

  • Domains aligned to product portfolios
  • Subdomains aligned to individual products or major capabilities
  • Each team owned a full subdomain end-to-end

This structure gave every team deep domain expertise, architectural control, independent deployment capability, and clear ownership of outcomes. Small teams did not mean fragmented teams. They owned coherent, customer-facing capabilities aligned with product portfolios.
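The ownership model above can be sketched as a simple mapping. This is a minimal illustration with invented domain, subdomain, and team names, not AgileWorks’ actual portfolio; the point is that every subdomain resolves to exactly one long-lived owning team.

```python
# Hypothetical sketch of domain-aligned ownership (all names invented).
# Domains align to product portfolios; subdomains align to individual
# products or major capabilities; each subdomain has one owning team
# accountable end-to-end for code, deployment, and outcomes.

TEAM_OWNERSHIP = {
    "payments": {                 # domain ~ product portfolio
        "checkout": "Team Red",   # subdomain ~ product capability
        "invoicing": "Team Blue",
    },
    "analytics": {
        "reporting": "Team Green",
    },
}

def owning_team(domain: str, subdomain: str) -> str:
    """Return the single team accountable for a given subdomain."""
    return TEAM_OWNERSHIP[domain][subdomain]

print(owning_team("payments", "checkout"))  # -> Team Red
```

Notice there is no shared-ownership case in the mapping: the structure itself encodes the unambiguous accountability described above, which is what made the team names safe to keep cultural rather than architectural.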

LegacyTech’s model relied on broader functional groupings within a monolithic system. Engineers were often reassigned to different teams based on capacity needs. That model works in monoliths but does not map cleanly to distributed systems where autonomy and boundary clarity matter.

Curiosity would have bridged this gap. A simple question about how AgileWorks’ teams aligned to product portfolios and subdomains might have made the structure clear and its purpose obvious.

When Experience Replaces Curiosity

A moment shared with me later captured the tension clearly. A long-tenured manager walked a LegacyTech senior leader through AgileWorks’ architecture and team structure. The leader responded, “I have been doing this for thirty years, and my playbook works. I do not understand how your organization works, nor do I care to at this point.”

This is a familiar leadership pattern. Playbooks shaped by years of success do not always map to new architectural contexts. One size does not fit all; playbooks can and should be adjusted to fit the context at hand. Experience is valuable, but experience without curiosity becomes limiting. And in distributed systems, where autonomy and domain clarity matter, it can quietly become destructive.

When Naming Becomes a Surrogate for Understanding

Another unexpected friction point involved team names. Years earlier, AgileWorks allowed teams to name themselves. They chose names like Red, Blue, and Green. These names were cultural, not architectural.

Inside the organization, each team was clearly mapped to its domain, products, and value stream. Ownership was unambiguous.

Yet some LegacyTech leaders found the names confusing, and some joked about them. They expected teams to be self-describing, named after products. Ironically, the argument undercut itself: LegacyTech also had teams named after animals, coined terms, and cultural references. On neither side did labels alone reflect product alignment.

Asking a simple question, “How do these team names map to your products and domains?”, would have resolved everything.

The Diligence Gap

AgileWorks’ success came not only from its code but from how its teams worked: decoupled, autonomous, and aligned with their architecture.

Ignoring that alignment risks dismantling the very system that created the value in the first place.

You can acquire the product and organization, but if you do not understand the system that built it, changing one without respecting the other often produces unintended consequences.

When Capacity Pressure Resurrects Old Patterns

Another difference emerged in how each organization responded to capacity pressure. AgileWorks designed its teams to be long-lived. Engineers rarely moved between teams, which allowed domain expertise, trust, and ownership to develop over time.

LegacyTech worked differently. At the end of major delivery cycles, leaders reassigned engineers wherever demand was highest. Teams functioned as resource pools, flexible and frequently reshuffled. Engineers did not always have a choice, and these moves were often framed as career opportunities.

When demand exceeds supply, leaders fall back on the operating model they trust. In larger, monolithic organizations, pooling and trading engineers across teams can work because the teams themselves are large and the domains broad. But in a distributed architecture with small, domain-aligned subdomain teams of three developers, this same practice can have a significant negative impact. Removing a single engineer destabilizes the team’s knowledge base, disrupts flow, and undermines the deep domain context those teams rely on.

When people become fungible, domains may become fungible as well. When domains become fungible, ownership becomes shallow. When ownership becomes shallow, flow slows, defects rise, and quality declines.

LegacyTech was not acting in bad faith. They were relying on a model that had worked in their environment. But AgileWorks required long-lived teams because its architecture depended on them. When capacity was tight, AgileWorks moved teams to the work, not individual team members.

Why Context and Choice Determine Team Design

Modern leaders have many frameworks and bodies of work to draw from: Cagan’s product operating model, Kersten’s Flow Framework, Team Topologies, Agile, Lean, DevOps, Value Stream Management, Scrum, XP, and Kanban. Each offers value, but none is universally correct. Their effectiveness depends entirely on the context in which they are applied.

Some leaders prefer fewer teams with broader scopes. Others prefer many small, domain-aligned teams. Both approaches can succeed. Both can fail. The difference is not the model itself but the architecture, constraints, and business environment it must support.

So the question is never which team model is better. The real question is which structure fits the architecture, flow constraints, and organizational realities of this moment in time.

And even more importantly, do leaders understand why a structure existed in the first place? Most team designs are not arbitrary. They reflect hard-earned lessons about architectural boundaries, flow of work, domain ownership, operational needs, and past failures.

Curiosity is what makes these choices effective. It separates meaningful alignment from surface-level consistency. Without curiosity, even well-meaning integration decisions can erase the very patterns that made a system successful.

Closing Reflection

Through curiosity, I came to understand why LegacyTech made its choices. They were not dismissing AgileWorks’ model. They were responding to their own context, constraints, and operating history. Their decisions made sense inside the environment they knew.

This is the point.

This is not a story about who was right or wrong. It is a story about what happens when architecture and structure drift apart. When a monolithic organization acquires a microservices-driven one, success depends not only on integrating people and tools, but also on integrating understanding.

When you acquire a product, you also acquire the organizational DNA that built it. Structure, practices, team design, flow, and architecture evolve together. Change one without respecting the other, and the system will reshape itself in ways you may not expect.

Alignment is powerful when it is guided by curiosity. Curiosity turns alignment from imposition into learning. And understanding becomes the bridge between two different worlds, trying to operate as one.

Sustainable integration is not about enforcing a single model, but about recognizing the strengths each system brings and understanding how to align them without erasing what makes them effective.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


Influences and Further Reading

Established ideas from modern software architecture, team design, and flow-based organizational practices inform this article. The concepts discussed draw on Domain-Driven Design, microservices and distributed systems principles, Team Topologies, the Flow Framework, DevOps and Lean thinking, and contemporary product operating models.

Notable contributors to these bodies of work include Eric Evans, Martin Fowler, Dave Farley, Manuel Pais, Matthew Skelton, Marty Cagan, and Mik Kersten, as well as research from the DORA community (Accelerate). Their work has shaped much of today’s understanding of how architecture, team structure, and organizational context interact to influence delivery performance and long-term success.

Filed Under: Agile, DevOps, Engineering, Leadership, Product Delivery, Software Engineering


Copyright © 2026 · RYU Advisory & Media, LLC. All rights reserved.
