Rethink Your Understanding

Transforming Software Delivery

Agile Isn’t Dead and AI Isn’t Killing It Either

January 24, 2026 by philc

AI Is Rebundling Roles, Shrinking Some Teams, and Raising the Bar for Responsible Delivery

9 min read

My first article for 2026. I’ve been back in my software roots: weeks of looping with Geoffrey Huntley’s Ralph Wiggum, visiting Steve Yegge’s Gastown, and swapping my earlier AI requirements and repo/tasking workflows for tighter, spec-first execution, GSD (Getting Sh*t Done repo) style, with planning modes that actually keep pace.

As much fun as I have been having implementing code, this article is about leadership and software delivery, not a new AI tool. It was sparked by a headline I’ve seen so many times I can almost predict it: “moving away from Agile,” “Agile is obsolete,” “Agile is dead.”

This time it was a YouTube title from a major consulting firm: “Moving away from Agile: What’s Next” (McKinsey). I wasn’t surprised; consulting narratives have a way of “ending” whatever you’re doing to make room for the next wave of services. I’m not here to debate the video. I’m here to challenge the pattern behind that headline, because it keeps coming back, and now it’s being repackaged as an AI-era conclusion.

I keep seeing “Agile is dead” headlines, now repackaged for the AI era. My take: AI isn’t killing Agile. AI is illuminating constraints that were already in the value stream.

If coding gets faster and lead time doesn’t improve, the bottleneck was never engineering output. It was prioritization, dependencies, validation, operability, and decision latency.

That’s the problem with the “Agile is dead” narrative: it confuses a delivery wrapper with a business capability.

Agility is not a sprint calendar, a Jira workflow, or a job title. Agility is a capability: the organizational skill to sense change, make decisions, and deliver value quickly enough to learn and adapt before the market moves again. Put prototypes in customers’ hands sooner. Shorten the time between “we think” and “we know.” Reduce the cost of being wrong. That capability is a competitive requirement in modern software businesses, not a trend we can retire.

In the post, I outline the load-bearing responsibilities that never go away, why roles will rebundle as teams shrink, and why Value Stream Management (VSM) and flow metrics matter more as AI increases delivery capacity. I’m genuinely curious about where others see the constraint shift as AI adoption grows.

In November, I wrote When AI Isn’t Enough to make a simple point: AI accelerates output, but it doesn’t replace fundamentals, judgment, or accountability. This article is a follow-up to that argument, focusing on the delivery operating model: https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/

What is changing, and changing fast, is how teams cover the work. AI compresses execution time, reshapes roles, and makes low-value ceremony impossible to defend. But it does not delete the responsibilities required to deliver software safely in the real world.

So no, Agile isn’t dead. Maybe what’s dying is Agile theater.

The debate is mislabeled

When people say “Agile is dead,” they’re often reacting to dysfunction that deserves to die:

  • Standups that are status meetings
  • Backlogs that are graveyards
  • Sprint plans that exist to create the appearance of control
  • Story points treated like productivity
  • “Agile transformations” where leadership behavior never changed

If that’s your lived experience, the conclusion feels tempting: the system is heavy and slow, and AI just made the contrast painful.

But that doesn’t mean agility is obsolete. It means your organization was using a process to simulate control.

Agility was never about the ceremonies. Agility is the ability to learn fast under uncertainty. Scrum and Kanban are just different ways to manage that learning loop. AI doesn’t remove the need to steer. It raises the stakes on steering because the engine just got bigger.

Team size can shrink. Responsibility surface area does not

AI is making smaller, stream-aligned teams more feasible in some contexts. You can feel it in the language: “builders” is becoming a popular label precisely because it implies broader ownership, people who can take an idea and move it forward end-to-end with help from tools and agents.

But here’s the part leaders keep getting wrong:
Shrinking the team does not shrink the work that must be covered to deliver and operate software responsibly.

Roles can be consolidated. Responsibilities do not disappear.

AI is collapsing traditional role boundaries within cross-functional teams.

Product managers can now use AI to do meaningful slices of work that used to require separate specialists: synthesize customer feedback at scale, interrogate trends in quantitative data, draft PRDs and acceptance criteria, and produce “good enough” prototypes that accelerate discovery and alignment.

On the delivery side, engineers are increasingly being pulled both upstream and downstream, tightening requirements, exposing edge cases, and improving the spec-to-task chain, while also generating test ideas, acceptance scenarios, and risk-based coverage faster.

This compresses cycle time and rebundles work into fewer hands, but the obligations don’t change: decisions still need evidence and judgment, and shipped changes still must be secure, validated, and operable. AI accelerates artifact creation, it doesn’t shift accountability when those artifacts are wrong.

If you reduce staffing without deliberately reallocating responsibilities, you don’t get a faster team. You get a fragile one that ships wrong faster.

This is the core misunderstanding in the “AI killed Agile” narrative. AI can take on more of the production of work: drafting, synthesizing, generating, and executing. It cannot take on accountability. And it absolutely cannot eliminate the need for clear ownership of the full delivery lifecycle.

The delivery “load-bearing system” that never goes away

No matter what toolchain you use (AI agents, copilots, code generators), a mature product team still has to cover the same end-to-end responsibility surface area across the value stream. AI can accelerate pieces of it, but it doesn’t delete the categories.

It is like the load-bearing structure of a building. You can renovate the interior all you want, swap tools, shrink teams, rebundle roles, automate entire phases, but you don’t get to remove the beams and call it innovation. If you take out the load-bearing parts, the building might look fine for a moment, right up until you add speed, scale, and real customer demand. Then it fails in expensive, public ways.

AI changes the finishing work and the pace of construction. It doesn’t change what’s structurally required for software delivery to hold up under pressure.

You still need outcome clarity: what problem you’re solving, for whom, and what success means.

You still need discovery and validation: evidence, constraints, and signals that the work is worth doing.

You still need work design: thin slicing, sequencing, WIP discipline, and dependency management.

You still need engineering coherence: architecture, contracts, data correctness, security, and privacy-by-design, tradeoffs that hold under change.

You still need verification and resilience: automated tests, performance and reliability checks, security validation, and confidence in recovery.

You still need delivery and operations: CI/CD, safe rollouts, observability, incident readiness, and cost hygiene.

And you still need learning loops: feedback into priorities, retrospectives with teeth, continuous improvement grounded in bottlenecks rather than opinion.

Call it Scrum, call it Kanban, call it flow. This surface area of responsibility is the reality of software delivery. The framework doesn’t change the reality. It changes how you manage it.

What AI actually changes: distribution, not elimination

AI is changing delivery systems in a few predictable ways.

First, it reduces the cost of producing executable clarity. PRDs, briefs, acceptance criteria, architecture options, test cases, runbooks, and documentation can be drafted quickly. That doesn’t remove the need for these artifacts. It changes who can draft them and how fast teams can iterate.

Second, it makes verification loops cheaper and more continuous. This is where the conversation should be. AI does not make quality automatic. It makes quality automation easier if you design for it. The winning pattern isn’t “AI wrote it, ship it.” The winning pattern is “AI drafted it, then the system verified it repeatedly until it earned release confidence.”
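To make that pattern concrete, here is a minimal sketch of a draft-verify loop. Everything in it is hypothetical: the gate functions stand in for whatever test runner, security scanner, and performance checks your pipeline actually runs, and redraft stands in for the drafting agent.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    diff: str
    attempts: int = 0
    evidence: dict = field(default_factory=dict)

# Hypothetical gates: each returns (passed, details). In a real pipeline these
# would shell out to your test runner, security scanner, and perf checks.
def run_tests(change):         return True, "unit and integration tests passed"
def run_security_scan(change): return True, "no critical findings"
def run_perf_check(change):    return True, "p99 latency within budget"

GATES = [("tests", run_tests), ("security", run_security_scan), ("performance", run_perf_check)]

def earn_release_confidence(change: Change, redraft, max_attempts: int = 3) -> bool:
    """Loop: draft -> verify -> redraft until every gate passes or we give up.
    A green loop earns a human review; it does not replace one."""
    while change.attempts < max_attempts:
        change.attempts += 1
        results = {name: gate(change) for name, gate in GATES}
        change.evidence = {name: detail for name, (ok, detail) in results.items()}
        if all(ok for ok, _ in results.values()):
            return True  # ready for human sign-off, not auto-ship
        change.diff = redraft(change)  # feed gate failures back to the agent
    return False
```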

Accountability doesn’t move to the model. It stays with the team.

Third, it moves the bottleneck upstream. When execution gets cheap, delays show up where they have always lived, but were easier to ignore when coding was slow:

  • Unclear priorities
  • Slow decision-making
  • Messy dependency networks
  • Environment and access friction
  • Data quality and migration risk
  • Compliance and governance
  • Weak observability
  • Unclear ownership

AI makes building faster. It makes those problems louder.

So if your end-to-end lead time doesn’t improve after “AI productivity gains,” don’t assume you outgrew Agile. You just discovered that engineering output was never your constraint. Your value stream was.
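A toy calculation makes the point (illustrative numbers, not benchmarks): when wait states dominate the value stream, even a 2x coding speedup barely moves end-to-end lead time.

```python
# Illustrative stage times, in days, for one feature (assumed numbers).
stages = {
    "waiting for prioritization": 20,
    "waiting on dependencies": 10,
    "coding": 5,
    "waiting for validation and release": 15,
}

lead_time = sum(stages.values())                        # 50 days end to end
stages_ai = {**stages, "coding": stages["coding"] / 2}  # AI halves coding time
lead_time_ai = sum(stages_ai.values())                  # 47.5 days

improvement = 100 * (lead_time - lead_time_ai) / lead_time
print(f"Lead time: {lead_time} -> {lead_time_ai} days ({improvement:.0f}% better)")
# A 2x coding speedup buys a 5% lead-time improvement: the constraint is the
# wait states, not engineering output.
```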

One-pizza teams still need full coverage, so roles rebundle

This is where AI is forcing the real change, and it’s also where the “Agile is dead” headline is most misleading.

In larger teams, you can afford specialists. In smaller teams, the same responsibilities exist, but fewer people are responsible for them. That forces rebundling, and it demands clearer ownership.

Even the smallest stream-aligned team needs a few capability anchors, whether those anchors are full-time roles or shared hats:

  • An outcome anchor who protects clarity and success measures
  • A technical anchor who owns coherence, integration risk, and tradeoffs
  • A quality anchor who owns the verification strategy and release confidence
  • A flow anchor or delivery manager who owns WIP discipline, bottleneck visibility, and learning loops
  • An operability anchor who owns SLOs, observability, and incident readiness

In a two-pizza team, these might be separate people. In a one-pizza team, one person may cover multiple anchors, with AI agents often taking on more of the drafting, research, and execution within each area. But the anchors still need named ownership. Otherwise, the responsibilities become “everyone’s job,” which quickly turns into “no one’s job.”
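One lightweight way to keep anchors from sliding into “everyone’s job” is to make the ownership map explicit and checkable. A sketch, with hypothetical names, where one person covers multiple anchors:

```python
ANCHORS = {"outcome", "technical", "quality", "flow", "operability"}

# One-pizza team: people can hold several anchors, but every anchor is named.
ownership = {
    "outcome": "Dana",
    "technical": "Priya",
    "quality": "Priya",       # rebundled: Priya wears two hats
    "flow": "Sam",
    "operability": "Sam",
}

unowned = ANCHORS - ownership.keys()
assert not unowned, f"Anchors without a named owner: {sorted(unowned)}"
```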

This is also where the “builder” identity can go right or wrong. “Builder” can mean end-to-end ownership and tighter loops. Or it can become a euphemism for “we removed roles and hoped the work disappeared.”

With AI, the work doesn’t disappear. It redistributes.

Scrum and Kanban are not obsolete (they are context tools)

A lot of “Agile is dead” takes quietly translate to: “timeboxes feel slow, therefore Scrum is obsolete.” That’s not how mature teams think about frameworks. Mature teams choose mechanisms based on context.

Scrum is useful when a team needs a forcing function for planning cadence, stakeholder inspection points, and a regular rhythm of alignment, especially while decision rights and trust are still maturing.

Flow-based systems become more attractive when deployments are continuous, work items are consistently small, WIP limits are respected, and dependencies are visible and actively managed.

AI nudges many teams toward flow because small batch size and fast verification become even more powerful. But “nudges” is not “replaces.” What AI really kills is ceremony without outcomes.

VSM matters more as AI adoption rises

Here’s the test I keep coming back to: If AI speeds up coding and your end-to-end lead time stays the same, your constraint is not engineering output. Your constraint is the value stream.

That’s why Value Stream Management and product operating models matter more, not less, in an AI-shaped world. You need visibility into where work actually waits. You need clarity on decision rights. You need an operating system that can absorb higher delivery capacity without increasing rework and production risk.

AI is an accelerator. The product operating model is the steering and the guardrails. If steering and guardrails are weak, AI doesn’t create agility; it creates faster confusion.

That’s why, in an AI adoption wave, VSM and the product operating model become non-negotiable: together they convert raw delivery capacity into aligned outcomes through visibility, ownership, decision rights, and investment boundaries.

AI can starve your teams upstream if discovery and prioritization can’t keep up.

AI can jam your teams downstream if validation and operational readiness can’t keep up.

When the loops get out of sync, speed doesn’t feel like acceleration. It feels like chaos.

What to keep, what to drop, what to add

If you want a more useful conversation than “after Agile,” try “after Agile theater.” Keep what preserves learning and reduces risk. Drop what exists to create the appearance of control.

Add what makes teams AI agent-ready without surrendering judgment: engineered clarity, continuous verification loops, guardrails by design, and flow metrics that expose constraints across the full value stream, not just inside engineering.

The claim I’ll keep making in 2026

Agile isn’t dead unless someone can point to a genuinely new delivery model that eliminates the core responsibilities of building software fast, safely, and under uncertainty.

AI changes speed, redistributes responsibilities, and shifts who does the work. It does not remove the obligation to make sound decisions, validate rigorously, and operate reliably. Accountability has not changed.

So yes, teams will shrink in some contexts. Roles will rebundle. Titles will evolve. “Builders” will become a common identity. AI agents will take on more implementation, research, and drafting. But the delivery foundation remains.

And if you’re tempted to post “Agile is dead,” I’ll offer a challenge instead: tell me what will replace the benefit an organization gains from becoming more agile. Or which responsibility set disappeared. Or admit you’re really talking about theater.

Either way, we’ll have a more honest conversation.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Moving away from Agile: What’s Next – Martin Harrysson & Natasha Maniar, McKinsey & Company (YouTube).

Filed Under: Agile, AI, Leadership, Product Delivery, Software Engineering, Value Stream Management

AI Fluent, Fundamentally Lost

December 7, 2025 by philc

The Dual Bar for Hiring in 2026

4 min read

Last week, Gene Kim and Steve Yegge published a piece on vibe coding titled Hiring in the Age of AI: What to Interview For.1 Their central question is one every engineering leader must confront: if AI has reshaped how software is built, how should we evaluate talent today?

They argue that modern interviews must identify candidates who have embraced AI, engineers who can prompt, manage context, and direct tools toward outcomes. I agree. But this view overlaps with a concerning pattern I described in my recent article, When AI Isn’t Enough.2

We are at a crossroads where two truths coexist: AI fluency is no longer optional, but it is not enough to make someone an engineer.

The “AI Crutch” Phenomenon

In recent software engineering interviews, I’ve noticed a recurring pattern. Candidates breeze through screens using AI assistants, producing clean, working code. But the moment the conversation shifts to fundamentals, they collapse.

In one instance, a candidate couldn’t explain why they chose composition over inheritance in the code they had just generated. The code was solid, but the engineer lacked a mental model of why it worked or what would break if the requirements changed.

This was a lack of foundation. AI had become a crutch, allowing them to produce strong output while masking a hollow understanding of the system.
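For anyone who wants the concrete version of that interview question, here is a minimal illustration of the trade-off the candidate couldn’t articulate: inheritance bakes the collaborator into the class hierarchy, while composition lets you swap it at runtime and in tests.

```python
class Logger:
    def log(self, msg):
        print(f"[log] {msg}")

# Inheritance: the service *is a* Logger; the coupling is fixed in the
# hierarchy, and changing logging behavior means changing the class itself.
class ServiceViaInheritance(Logger):
    def handle(self, request):
        self.log(f"handling {request}")

# Composition: the service *has a* logger; the collaborator can be swapped for
# a buffered logger, a null logger, or a test double without touching the class.
class ServiceViaComposition:
    def __init__(self, logger):
        self.logger = logger

    def handle(self, request):
        self.logger.log(f"handling {request}")

svc = ServiceViaComposition(Logger())
svc.handle("GET /users")
```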

The Great Divergence: Acceleration vs. Noise

A pattern is emerging across the industry. Software engineering is splitting into two groups, and the results are counterintuitive.

Group 1: The Architects. Senior engineers (and those with strong instincts) are achieving massive productivity gains. They can guide AI, spot hallucinations, and explain clean architecture to the tool. For them, AI is an accelerator.

Group 2: The Prompters. Engineers without fundamentals are actually getting slower. They cannot evaluate the AI’s suggestions. When the model drifts, they lack the intuition to course-correct, turning the tool into noise rather than augmentation.

This second group creates a hidden enterprise risk: The Glass Cannon.

They build systems that look impressive and powerful but shatter under the pressure of real-world constraints. The risks are invisible at first, but devastating over time:

  • The Black Box Problem: Because they cannot explain their own output, they treat their code as a third-party library. When it breaks, recovery time skyrockets.
  • Debt at Machine Speed: They may ship features, but they generate technical debt at an accelerated rate. They cannot optimize for cloud costs, architecture, performance, or resilience, nor spot silent security vulnerabilities, because they assume “working” means “correct.”
  • Team Burden: They shift significant pressure onto senior engineers and teammates who must catch flawed designs, brittle patterns, and AI-driven errors during code reviews.

This shifts the cost of software development from creation (which becomes cheap) to maintenance (which becomes prohibitively expensive).

The Dual Bar for Modern Talent

Effective hiring in 2026 requires us to stop picking one lens over the other. We must test for The Dual Bar:

  1. Can the candidate reason through a problem without the aid of AI? (To ensure they aren’t building glass cannons.)
  2. Can they intentionally use AI to accelerate their work? (To ensure they remain competitive.)

We aren’t hiring for what AI might be able to do in 2030. We are hiring for what teams need to ship and maintain now. That requires a new hiring rubric.

A New Hiring Model

To surface the engineers who can think, not just the ones who can prompt, structure your interview process around these five signals:

  • Fundamentals: Test this with at least one session where AI tools are off the table. Focus on fundamentals, design reasoning, and trade-offs, not syntax recall.
  • AI Fluency: Ask them to walk through a recent AI-assisted project. How did they prompt? How did they debug model mistakes? Or have them work through a challenge in real time using AI on a shared screen.
  • Communication: In an AI world, muddled explanations lead to muddled prompts. Can they articulate technical context with precision?
  • Systems Thinking: Present a scenario with competing trade-offs (e.g., latency vs. consistency). See if they can connect decisions to the broader architecture.
  • Curiosity: Ask what they’ve experimented with in the last 90 days. Engineers thriving in this era are climbing the learning curve with intention.

Acceleration vs. Illusion

There is a fine line between acceleration and illusion. If we hire based on the wrong signals, we risk building teams with strong output but weak understanding.

The current generation of great engineers will be those who use AI as a collaborator, not a substitute for thinking. They will use these tools to amplify their strengths rather than hide their gaps.

The question every leader should ask now: Does our interview process surface the engineers who can think, or just the ones who can prompt?

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Kim, G., & Yegge, S. (2025, December 1). Hiring in the Age of AI: What to Interview For. IT Revolution. https://itrevolution.com/articles/hiring-in-the-age-of-ai-what-to-interview-for/
  2. Phil Clark. (2025, November 29). When AI Isn’t Enough. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/

Filed Under: AI, Engineering, Leadership, Software Engineering

When AI Isn’t Enough

November 29, 2025 by philc

Why Fundamentals Still Matter in an AI-Shaped Engineering World

6 min read

In the past year, I’ve noticed a shift in how engineering candidates present themselves. A senior director on my team recently began interviewing for a critical backfill.

On paper, the candidates were strong. In the early rounds, several performed exceptionally well, with clean solutions, fast iterations, and confident code. But once the conversation moved from what they could produce with AI to what they understood without it, everything changed.

The same candidates who looked senior-level on a coding screen suddenly struggled with composition, inheritance, architectural tradeoffs, or the purpose behind common design patterns. They weren’t nervous. They didn’t know.

And that’s when a deeper leadership question emerged, one that every software engineering leader I’ve spoken with over the past year is now wrestling with:

What does it mean to be a software engineer when AI can write much of the software?

The Illusion of Mastery

We’ve been pushing AI adoption in our organization since early 2023. Not because it was trendy, but because it was obvious where the future was heading. Over the summer, we doubled down on AI literacy, aiming to have every engineer use these tools comfortably and confidently by year’s end.

The early days were rocky. Engineers said the tools slowed them down. The suggestions lacked context. Resetting instructions became a ritual. Reviews took longer, not shorter, because the generated code wasn’t always correct; it only looked correct. That friction turned out to be a necessary phase.

Once engineers learned how to provide context, prompt effectively, and evaluate output, their productivity didn’t just improve; it multiplied. AI amplifies skill; it does not create it. And that dynamic is now playing out across many hiring pipelines.

Do Fundamentals Still Matter?

A school of thought is gaining momentum in the industry. I’ve heard it from candidates, managers, and even a few senior leaders:

“If you can ask AI the right questions, do you really need to understand the underlying concepts?”

It’s a tempting idea. AI can explain patterns. It can suggest architecture. It can generate code that appears correct and often is.

In specific roles, rapid prototyping, experimentation, and early-stage product exploration may be enough. But anyone who has owned an enterprise system knows the distinction: A proof of concept is not a production system.

In the world of prototypes, speed wins; in the world of enterprise platforms, correctness, reliability, durability, and performance win. The gap between the two is everything.

The New Hiring Reality: AI Is Distorting the Signal

AI has blurred the lines between junior and senior skill, at least at first glance.

Depending on your interview workflow, AI-assisted candidates often perform exceptionally well in early rounds. The solutions come fast. The code reads cleanly. The abstractions look polished. If you’re not paying attention, it’s easy to mistake output for understanding.

But when the conversation shifts to architecture, reasoning, debugging, or explaining why something works, the floor sometimes drops out.

This is not a candidate problem so much as an ecosystem problem. Our traditional hiring processes were not designed for a world where AI can mask gaps in foundational knowledge.

One candidate our director interviewed solved coding problems flawlessly with AI assistance, but could not explain the difference between inheritance and composition. He had mastered the tool, not the craft.

And that raises another concern, one that many CTOs and senior technology leaders now whisper privately: AI is enabling people to appear more capable than they actually are.

AI-Enabled Deception

We’re beginning to see cases where individuals use AI not just to enhance competence, but to manufacture the appearance of it.

Some candidates have used AI to pass interviews, screening rounds, and background checks, only to contribute little or no meaningful work once hired. I know of firsthand examples where someone worked just long enough to collect paychecks before disappearing.

The reality is that, in a screen-shared interview, candidates can quietly lean on second-monitor tools or even AI “whispers.” Everything looks legitimate, yet the candidate may be receiving real-time assistance you cannot detect. Our previous trust assumptions in technical interviews no longer reflect the capabilities of modern tools.

This Is Where Fundamentals Matter Again

Fundamentals matter, not out of nostalgia, but because high-performing systems demand them. Enterprise systems break in ways that require:

  • context
  • judgment
  • intuition
  • analytical reasoning
  • pattern literacy
  • understanding of failure domains
  • the ability to debug what AI got wrong

AI will increasingly diagnose issues before humans get involved. But evaluating whether the fix is correct still requires someone who understands the system beneath the abstraction.

Without fundamentals, engineers become dependent on AI. With fundamentals, engineers become exponentially more effective. That distinction is not negotiable.

Accountability Hasn’t Changed

A subtle misconception is emerging: if AI generated the code, responsibility shifts. It does not. Teams remain fully accountable for every line they push to production, AI-assisted or not. And at least for now and the near future, nothing about AI’s current capabilities changes that.

AI does not dilute ownership. AI does not absorb blame. AI does not change the duty of care.

If an engineer cannot explain the code they are committing, they are not ready to commit it. And if a team cannot reason about how a change behaves under load, in failure, or across distributed components, the team is not ready to own that system.

This isn’t theoretical. AI-generated code is already introducing subtle regressions, brittle logic, and incorrect assumptions. When teams ship code they don’t fully understand, failures become harder to diagnose and recover from.

Ambiguity around ownership is the fastest way to erode reliability.

Fundamentals preserve accountability. They allow engineers to validate, challenge, and harden AI-generated output with the same rigor expected of human-written code. Most importantly, they prevent teams from outsourcing judgment, the one responsibility no tool can assume.

In the current AI era, fundamentals serve as guardrails that keep systems reliable and teams accountable.

Rethinking What We Evaluate

If we expect engineers to use AI, and we should, then interviews must evolve to focus on what AI cannot conceal. These include architectural reasoning, debugging skills, the ability to assess and challenge AI-generated output, design intuition, system-level thinking, and the ability to explain decisions before writing code.

Engineers still need a strong command of foundational concepts that AI frequently mishandles. They must understand how data structures and algorithms affect performance and scalability, and how memory and state behave in real production environments. They should know core software design principles such as encapsulation, composition, immutability, and functional patterns, which guide how systems are structured and maintained.

They also benefit from fluency in common design patterns and the judgment to apply them responsibly. They need a clear grasp of APIs, contracts, and system boundaries, as well as how architectural choices play out in distributed, event-driven, and microservice-based environments. They must be able to reason about concurrency, consistency models, failure scenarios, and performance bottlenecks, areas where AI-generated code frequently introduces subtle bugs.

Finally, they require strong testing, debugging, and diagnostic skills. Engineers must be able to interpret logs, metrics, traces, and behavioral patterns to understand what software is actually doing rather than relying solely on what an AI claims it should do.

For now, these skills are what set high-performing, AI-capable engineers apart.

The Bottom Line

AI is transforming software development at a pace we haven’t seen since the shift from on-prem systems to the cloud. But speed introduces its own risks. Leaders must now answer a question that will define the next decade of engineering:

Do we want teams that generate code with AI, or teams that understand, validate, and elevate what AI produces?

Because in proofs of concept, AI might be enough. In enterprise systems, where durability, reliability, and trust matter, misunderstanding comes at a cost. AI is an extraordinary amplifier. Fundamentals remain the stabilizer.

Engineering organizations that insist on both will build the most resilient and competitive systems in the years ahead.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: AI, Engineering, Leadership, Software Engineering

Beyond “Beyond Delivery”: AI Across the Value Stream

October 11, 2025 by philc

A follow-up article and reflection on how AI amplifies the systems it enters, and why clarity in measurement and language defines its true impact.

4 min read

After reading Laura Tacho’s latest article, “What the 2025 DORA Report Means for Your AI Strategy,” published today by DX, I found myself nodding along from start to finish. Her analysis reinforces what many of us have been saying for the past year: AI doesn’t automatically improve your system; it amplifies whatever already exists within it.

If your system is healthy, AI accelerates learning, delivery, and improvement. If it’s fragmented or dysfunctional, AI will only expose that reality faster.

In my earlier and related article, “Beyond Delivery: Realizing AI’s Potential Across the Value Stream,” I explored this same theme, referencing Laura’s previous work and the DX Core Four research to show how AI’s true promise emerges when applied across the entire value stream, not just within delivery. Her new reflections build on that conversation beautifully, grounding it in DORA’s 2025 findings and placing even greater emphasis on what truly determines AI success: measurement, monitoring, and system health.

AI’s True Leverage Is in the System

What stands out in both discussions is that AI amplifies the system it enters.

Healthy systems, with strong engineering practices, small-batch work, solid source control, and active observability, see acceleration. Weak systems, where friction and inconsistency already exist, see those problems amplified.

That’s why measurement and feedback are the new leadership disciplines.

Organizations treating AI as a system-level investment, rather than a tool for individual productivity, are seeing the greatest impact. They aren’t asking “how many developers are using Copilot?” but instead “how is AI helping our teams improve outcomes across the value stream?”

DORA’s latest research validates that shift, focusing less on adoption rates and more on outcomes. It echoes a point Laura made and I emphasized in my own writing: AI’s advantage is proportional to the strength of your engineering system.

Why Clarity Still Matters

While I agree with nearly everything in Laura’s article, one nuance deserves attention, not as a critique, but as context.

DORA, DX Core 4, LinearB, and other Software Engineering Intelligence (SEI) platforms are not Value Stream Management (VSM) platforms. They measure one segment of the delivery lifecycle: Create and Release. True VSM spans the entire lifecycle, from idea to delivery and operation.

This distinction matters because where AI is applied should match where your bottlenecks exist.

If your constraint is upstream, in ideation or backlog management, and you only apply AI within development, you’re optimizing a stage that isn’t the problem.

Think of your value stream as four connected tanks of water: ideation, creation, release, and operation.

If the first tank (ideation) is blocked, making the water move faster in the second (creation) doesn’t improve throughput. You’re just circulating water in your own tank while everything above remains stuck.

That’s why AI should be applied where it can improve the overall flow, across the whole system, not just a single stage.

It’s also where clarity of language matters. Some Software Engineering Intelligence (SEI) platforms, including Laura’s organization, integrate DORA metrics within broader insights and occasionally describe their approach as VSM. From a marketing standpoint, that’s understandable; SEI platforms compete with full-scale VSM platforms, such as Planview Viz, which measure the entire value stream. However, it’s worth remembering that DORA and most SEI metrics represent one vital stage, not the entire system.

On Vendors, Neutrality, and Experience

I have deep respect for Laura and her organization’s work advancing how we measure and improve developer experience. Over the last four years, I’ve also established professional relationships with several of these platform providers, offering feedback and leadership perspectives to their teams as they evolve their products and strategies.

I share this because my perspective is grounded in firsthand experience, research, and conversations across the industry, not because of any endorsement. I’m not paid to promote any vendor. Those who know me are aware that I have my preferences, currently Planview Viz for Value Stream Management, as well as LinearB and the DX Core 4 for Software Engineering Intelligence and developer-experience insights.

Each offers unique value, but I’ve yet to see a single platform deliver a truly complete view across all stages, combining full system-level metrics and team sentiment data. Until that happens, I’ll continue to advocate for clarity in how these solutions describe and market themselves, and for measurements that accurately reflect reality.

And to be fair, I haven’t kept up with every vendor’s latest releases, so I encourage anyone exploring these tools to do their own research and choose what best fits their organization’s context and maturity.

Closing Thought

Laura’s article is spot-on in identifying what really drives AI impact: monitoring, measuring, and managing the system it touches.

That’s the same theme at the heart of Beyond Delivery: that AI’s potential isn’t realized through automation alone, but through its ability to illuminate flow, reveal friction, and help teams improve faster than before.

When we describe our systems accurately, we focus on what truly matters, and that’s when AI stops being a tool for speed and becomes an accelerant for value across the entire system.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Tacho, Laura. “What the 2025 DORA Report Means for Your AI Strategy.” DX Newsletter, October 8, 2025.
    Available at: https://newsletter.getdx.com/p/2025-dora-report-means-for-your-ai-strategy
  • Clark, Phil. “Beyond Delivery: Realizing AI’s Potential Across the Value Stream.” Rethink Your Understanding, September 2025.
    Available at: https://rethinkyourunderstanding.com/2025/09/beyond-delivery-realizing-ais-potential-across-the-value-stream/
  • DORA Research Team. “2025 State of AI-Assisted Software Development (DORA Report).” Google Cloud / DORA, September 2025.
    Available at: https://cloud.google.com/devops/state-of-devops

Filed Under: Agile, AI, DevOps, Metrics, Product Delivery, Software Engineering, Value Stream Management

From Two Pizzas to One: How AI Reshapes Dev Teams

October 2, 2025 by philc

Exploring how AI could reshape software teams, smaller pods, stronger guardrails, and the balance between autonomy and oversight.

7 min read

For more than two decades, Jeff Bezos’s “two-pizza team” rule has been shorthand for small, effective software teams: a group should be small enough that two pizzas can feed them, typically about 5–10 people. The principle is simple: fewer people means fewer communication lines, less overhead, and faster progress. The math illustrates this well: 10 people create 45 communication channels, while four people create just six. Smaller groups spend less time coordinating, which often leads to faster outcomes.
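The arithmetic behind those figures is the pairwise-connection formula:

```latex
C(n) = \binom{n}{2} = \frac{n(n-1)}{2}, \qquad
C(10) = \frac{10 \cdot 9}{2} = 45, \qquad
C(4) = \frac{4 \cdot 3}{2} = 6.
```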

This article was sparked by a comment at this year’s Enterprise Technology Leadership Summit. A presenter suggested that AI could soon reshape how we think about team size. That got me wondering: what would “one-pizza teams” actually look like if applied to enterprise-grade systems where resilience, compliance, and scalability are non-negotiable?

The Hype: “Do We Even Need Developers?”

In recent months, I’ve heard product leaders speculate that AI might make developers optional. One senior product manager even suggested, half-seriously, that “we may not need developers at all, since AI can write code directly.” On the surface, that sounds bold. But in reality, it reflects limited hands-on experience with the current tools. Generating a demo or prototype with AI is one thing; releasing code into a production system, supporting high-volume, transactional workloads with rollback, observability, and compliance requirements, is another. It’s easy to imagine that AI can replace developers entirely until you’ve lived through the complexity of maintaining enterprise-grade systems.

I’ve also sat in conversations with CTOs and VPs excited about the economics. AI tools, after all, look cheap compared to fully burdened human salaries. On a spreadsheet, reducing teams of 8–12 engineers down to one or two may appear to unlock massive savings. But here again, prototypes aren’t production, and what looks good in theory may not play out in practice.

The Reality Check

The real question isn’t whether AI eliminates developers, it’s how it changes the balance between humans, tools, and team structure. While cost pressures may tempt leaders to shrink teams, the more compelling opportunity may be to accelerate growth and innovation. AI could enable organizations to field more small teams in parallel, modernize multiple subdomains simultaneously, deliver features faster, and pivot quickly to outpace their competitors.

Rather than a story of headcount reduction, one-pizza teams could become a story of capacity expansion, with more teams and a broader scope, all while maintaining the same or slightly fewer people. But this is still, to some extent, a crystal ball exercise. None of us can predict with certainty what teams will look like in three, five, or ten years. What seems possible today is that AI enables smaller pods to take on more responsibility, provided we approach this shift with caution and discipline.

Why AI Might Enable Smaller Teams

AI’s value in this context comes from how it alters the scope of work for each developer.

Hygiene at scale. Practices that teams often defer, such as tests, documentation, release notes, and refactors, can be automated or continuously maintained by AI. Quality could become less negotiable and more baked into the process.

Coordination by contract. AI works best when given context. PR templates, paved roads, and CI/CD guardrails provide part of that. But so do rule files, lightweight markdown contracts such as cursor_rules.md or claude.md that encode expectations for test coverage, security practices, naming conventions, and architecture. These files give AI the boundaries it needs to generate code that aligns with team standards. Over time, this could transform AI from a generic assistant into a domain-aware teammate.

Broader scope. With boilerplate and retrieval handled by AI, a small pod might own more of the vertical stack, from design to deployment, without fragmenting responsibilities across multiple groups.

Reduced overhead. Acting as a shared memory and on-demand research partner, AI can minimize the need for lengthy meetings or additional specialists. Coordination doesn’t disappear, but some of the lower-value overhead could shrink.

From Efficiency to Autonomy

The promise isn’t simply in productivity gains per person; it may lie in autonomy. AI could provide small pods with enough context and tooling to operate independently. This autonomy might enable organizations to spin up more one-pizza teams, each capable of covering a subdomain, reducing technical debt, delivering features, or running experiments. Instead of doing the same work with fewer people, companies might do more work in parallel with the same resources.

How Roles Could Evolve

If smaller teams become the norm, roles may shift rather than disappear.

  • Product Managers could prototype with AI before engineers write code, run quick user tests, and even handle minor fixes.
  • Designers might use AI to generate layouts while focusing more on UX research, customer insights, and accessibility.
  • Engineers may be pushed up the value chain, from writing boilerplate to acting as architects, integrators, and AI orchestrators. This creates a potential career pipeline challenge: if AI handles repetitive tasks, how will junior engineers gain the depth needed to become tomorrow’s architects?
  • QA specialists can transition from manual testing to test strategy, utilizing AI to accelerate execution while directing human effort toward edge cases.
  • New AI-native roles, such as prompt engineers, context engineers, AI QA, or solutions architects, may emerge to make AI trustworthy and enterprise-aligned.

In some cases, the traditional boundaries between product, design, and engineering could blur further into “ProdDev” pods, teams where everyone contributes to both the vision and the execution.

The Enterprise Reality

Startups and greenfield projects may thrive with tiny pods or even solo founders leveraging AI. But in enterprise environments, complexity doesn’t vanish. Legacy systems, compliance, uptime, and production support continue to require human oversight.

One-pizza pods might be possible in select domains, but scaling them down won’t be simple. Where it does happen, success may depend on making two human hats explicit:

  • Tech Lead – guiding design reviews, threat modeling, performance budgets, and validating AI output.
  • Domain Architect – enforcing domain boundaries, compliance, and alignment with golden paths.

Even then, these roles rely on shared scaffolding:

  • Production Engineering / SRE – managing incidents, SLOs, rollbacks, and noise reduction.
  • Platform Teams – providing paved roads like IaC modules, service templates, observability baselines, and policy-as-code.

The point isn’t that enterprises can instantly shrink to one-pizza teams, but that AI might create the conditions to experiment in specific contexts. Human judgment, architecture, and institutional scaffolding remain essential.

Guardrails and Automation in Practice

For smaller pods to succeed, standards need to be non-negotiable. AI may help enforce them, but humans must guide the judgment.

Dual-gate reviews. AI can run mechanical checks, while humans approve architecture and domain impacts.

Evidence over opinion. PRs should include artifacts, tests, docs, and performance metrics, so reviews are about validating evidence, not debating opinions.

Security by default. Automated scans block unsafe merges.

Rollback first. Automation should default to rollback, with humans approving any decision to fix forward.

Toil quotas. Reducing repetitive ops work quarter by quarter keeps small teams sustainable.

Beyond CI, AI can also shape continuous delivery by optimizing pipelines, enforcing deployment policies, validating changes against staging telemetry, and even self-healing during failures.
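As an illustration of how dual-gate reviews and evidence-over-opinion might be encoded in a merge policy (the field names are hypothetical; a real setup would wire this into your CI provider), machines block on missing evidence while humans remain the gate for architecture and domain impact:

```python
REQUIRED_EVIDENCE = {"tests", "docs", "perf_metrics"}

def mechanical_gate(pr: dict) -> list[str]:
    """First gate: objective checks that automation (or AI) can run."""
    problems = []
    missing = REQUIRED_EVIDENCE - set(pr.get("artifacts", []))
    if missing:
        problems.append(f"missing evidence: {sorted(missing)}")
    if pr.get("security_findings", 0) > 0:
        problems.append("security scan blocks merge")
    return problems

def needs_human_gate(pr: dict) -> bool:
    """Second gate: architecture and domain impacts always get human judgment."""
    return pr.get("touches_architecture", False) or pr.get("crosses_domain", False)

pr = {"artifacts": ["tests", "docs"], "security_findings": 0, "touches_architecture": True}
print(mechanical_gate(pr))   # ["missing evidence: ['perf_metrics']"]
print(needs_human_gate(pr))  # True
```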

What’s Real vs. Wishful Thinking (2025)

AI is helping, but unevenly. Gains emerge when organizations re-architect workflows end-to-end, rather than layering AI on top of existing processes.

Quality and security remain human-critical. Studies suggest a high percentage of AI-generated code carries vulnerabilities. AI may accelerate output, but without human checks, it risks accelerating flaws.

AI can make reviews more efficient by summarizing diffs and flagging issues, but final approval still requires human judgment on architecture and risk.

And production expectations haven’t changed. A 99.99% uptime commitment still allows only 15 minutes of downtime per quarter. Even if AI can help remediate, humans remain accountable for those calls.

Practitioner feedback is also worth noting. In conversations with developers and business users of AI, most of whom are still in their first year of adoption, the consensus is that productivity gains are often inflated. Some tasks are faster with AI, while others require more time to manage context. Most people view AI as a paired teammate, rather than a fully autonomous agent that can build almost everything in one or two shots.

Challenges to Consider

Workforce disruption. If AI handles more routine work, some organizations may feel pressure to reduce the scope of specific roles. Whether that turns into cuts or an opportunity to reskill may depend on leadership choices.

Mentorship and pipeline. Junior engineers once learned by doing the work AI now accelerates. Without intentional design of new learning paths, we may risk a gap in the next generation of senior engineers.

Over-reliance. AI is powerful but not infallible. It can hallucinate, generate insecure code, or miss subtle regressions. Shrinking teams too far might leave too few human eyes on critical paths.

A Practical Checklist

  • Product risk: 99.95%+ SLOs or regulated data? Don’t shrink yet.
  • Pager noise: <10 actionable alerts/week and rollback proven? Consider shrinking.
  • Bus factor: ≥3 engineers can ship/release independently? Consider shrinking.
  • AI maturity: Are AI checks and PR evidence mandatory? Consider shrinking.
  • Toil trend: Is toil tracked and trending down? Consider shrinking.
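Encoded as a decision aid, the checklist above might look like the following sketch (thresholds copied from the list; the structure matters more than the exact numbers):

```python
def ready_to_shrink(team: dict) -> tuple[bool, list[str]]:
    """Return (verdict, blockers) for a one-pizza experiment."""
    blockers = []
    if team["slo"] >= 0.9995 or team["regulated_data"]:
        blockers.append("product risk: don't shrink yet")
    if not (team["weekly_actionable_alerts"] < 10 and team["rollback_proven"]):
        blockers.append("pager noise or unproven rollback")
    if team["independent_shippers"] < 3:
        blockers.append("bus factor below 3")
    if not (team["ai_checks_mandatory"] and team["pr_evidence_mandatory"]):
        blockers.append("AI maturity gates missing")
    if not team["toil_trending_down"]:
        blockers.append("toil not tracked or not trending down")
    return (not blockers, blockers)

ok, why_not = ready_to_shrink({
    "slo": 0.999, "regulated_data": False, "weekly_actionable_alerts": 6,
    "rollback_proven": True, "independent_shippers": 3,
    "ai_checks_mandatory": True, "pr_evidence_mandatory": True,
    "toil_trending_down": True,
})
print(ok, why_not)  # True []
```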

Bottom Line

AI may make one-pizza teams possible, but only if automation carries the repetitive workload, humans maintain judgmental oversight, and guardrails ensure standards. Done thoughtfully, smaller pods don’t mean scarcity; they can mean focus.

And when organizations multiply these pods across a portfolio, the outcome might not just be sustaining velocity but accelerating it: more features, faster modernization, shorter feedback loops, and quicker pivots against disruption.

This is the story of AI in team structure, not doing the same with less, but doing more with the same.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

Beyond Delivery: Realizing AI’s Potential Across the Value Stream

September 29, 2025 by philc

Moving beyond AI-assisted delivery to achieve measurable, system-wide impact through value stream visibility and flow metrics.

10 min read

At the 2025 Engineering Leadership Tech Summit, Mik Kersten previewed ideas from his upcoming book, Output to Outcome: An Operating Model for the Age of AI. He reminded us of a truth often overlooked in digital transformation: Agile delivery teams are not the constraint in most cases.

Kersten broke the software value stream into four phases: Ideate, Create, Release, and Operate. He showed how the majority of waste and delay happens outside of coding. One slide in particular resonated with me. Agile teams accounted for just 8% of overall cycle time. The real delays sat at the bookends: 48% in ideation, slowed by funding models, approvals, and reprioritizations; and 44% in release, bogged down by dependencies, technical debt, and manual processes.

This framing raises a critical question: if we only apply AI to coding or delivery automation, are we just accelerating the smallest part of the system while leaving the actual bottlenecks untouched?

AI in the Delivery Stage: Where the Industry Stands

In a recent DX Engineering Enablement podcast, Laura Tacho and her co-hosts discussed the role of AI in enhancing developer productivity. Much of their discussion centered on the Create and Release stages: code review, testing, deployment, and CI/CD automation. Laura made a compelling point about moving beyond “single-player mode”:

“AI is an accelerant best when it’s used at an organizational level, not when we just put a license in the hands of an individual… Platform teams can own a lot of the metaphorical AI headcount and apply it in a horizontal way across the organization.”

Centralizing AI adoption and applying it across delivery produces leverage, rather than leaving individuals to experiment in isolation. But even this framing is still too narrow.

The Missing Piece: AI Adoption Across the Entire Stream

The real opportunity is to treat AI not as a tool for delivery efficiency, but as a partner across the entire value stream. That means embedding AI into every stage and measuring it with system-level visibility, not just delivery dashboards.

This is why I value platforms that integrate tool data across the whole stream, system metrics and visibility dashboards, rather than tools that stop at delivery.

Of course, full-stream visibility platforms are more expensive, and in many organizations, only R&D teams are driving efforts to improve flow. As I’ve argued in past writing on SEI vs. VSM, context matters: sometimes the right starting point is SEI, when delivery is the bottleneck. But when delays span ideation, funding, or release, only a VSM platform can expose and address systemic waste.

AI opportunities across the stream:

  • Ideation (48%) – Accelerate customer research, business case drafting, and approvals; surface queues and wait states in one view.
  • Create (8%) – Apply AI to coding, reviews, and testing, but tie it to system outcomes, not vanity speedups.
  • Release (44%) – Automate compliance, dependency checks, and integration work to reduce handoff delays.
  • Operate – Target AI at KTLO and incident patterns, feeding learnings back into product strategy.

When AI is applied across the whole system (value stream), we can ask a better question: not “How fast can we deploy?” but “How much can we compress idea-to-value?” Moving from 180 days to 90 days or less becomes possible when AI supports marketing, product, design, engineering, release, and support, and when the entire system is measured, not just delivery.

VSM vs. Delivery-Only Tooling

This is where tooling distinctions matter. DX Core 4 and SEI platforms, such as LinearB, focus on delivery (Create and Release), which is valuable but limited to one stage of the system. Planview Viz and other VSM platforms, by contrast, elevate visibility across the entire value stream.

Delivery-only dashboards may show how fast you’re coding or deploying. But Value Stream Management reveals the actual business constraints, often upstream in funding, prioritization, PoCs, and customer research, or downstream in handoffs and release.

Without that lens, AI risks becoming just another tool that speeds up developers without improving the system.

AI as a Force Multiplier in Metrics Platforms

AI embedded directly into metrics platforms can change the game. In a recent Product Thinking podcast, John Cutler observed:

“We talked to a company that’s spending maybe $4 million in staff hours per quarter around just people spending time copying and prepping for all these types of things… All they’re doing is creating a dashboard, pulling together a lot of information, and re-contextualizing it so it looks the same in a meeting. I think that’s just a massive opportunity for AI to be able to help with that kind of stuff.”

This hidden cost of operational overhead is real. Leaders and teams waste countless hours aggregating and reformatting data into slides or dashboards to make it consumable.

Embedding AI into VSM or SEI platforms removes that friction. Instead of duplicating effort, AI can generate dashboards, surface insights, and even facilitate the conversations those dashboards are meant to support.

This is more of a cultural shift than a productivity gain. Less slide-building, more strategy. Less reformatting, more alignment. And metrics conversations that finally scale beyond the few who have time to stitch the story together manually.

The ROI Lens: From Adoption to Efficiency

The ROI of AI adoption is no longer a question of whether to invest; that decision is now a given. As Atlassian’s 2025 AI Collaboration Report shows, daily AI usage has doubled in the past year, and executives overwhelmingly cite efficiency as the top benefit.

The differentiator now is how efficiently you manage AI’s cost, just as the cloud debate shifted from whether to adopt to how well you could optimize spend.

But efficiency cannot be measured by isolated productivity gains. Atlassian found that while many organizations report time savings, only 4% have seen transformational improvements in efficiency, innovation, or work quality.

The companies breaking through embed AI across the system: building connected knowledge bases, enabling AI-powered coordination, and making AI part of every team.

That’s why the ROI lens must be grounded in flow metrics. If AI adoption is working, we should see:

  • Flow time shrinks
  • Flow efficiency rises
  • Waste reduction becomes visible in the stream
  • Flow velocity accelerates (more items delivered at the same or lower cost)
  • Flow distribution rebalances (AI resolving technical debt and reducing escaped defects)
  • Flow load stabilizes (AI absorbing repetitive work and signaling overload early)

VSM system-wide platforms make these signals visible, showing whether AI is accelerating the idea-to-value process across the entire stream, not just helping individuals move faster.
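For concreteness, the first two signals can be computed directly from work-item timestamps. A minimal sketch with made-up data; real VSM platforms derive these from tool integrations across the stream:

```python
from datetime import datetime as dt

# Hypothetical work items: when each entered the stream, when it shipped, and
# how many of the elapsed days were active work (the rest is waiting).
items = [
    {"start": dt(2025, 9, 1), "done": dt(2025, 9, 21), "active_days": 4},
    {"start": dt(2025, 9, 3), "done": dt(2025, 9, 17), "active_days": 5},
]

flow_times = [(i["done"] - i["start"]).days for i in items]      # 20 and 14 days
avg_flow_time = sum(flow_times) / len(flow_times)                # idea -> done
flow_efficiency = sum(i["active_days"] for i in items) / sum(flow_times)

print(f"Flow time: {avg_flow_time:.1f} days, flow efficiency: {flow_efficiency:.0%}")
# If AI adoption is working, flow time should fall and efficiency should rise.
```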

Bringing It Full Circle

In recent conversations with a large organization’s CTO, and again with Laura while exploring how DX and Anthropic measure AI, I kept returning to the same point: we already have the metrics to know if AI is making an impact. AI is now just another option or tool in our toolbox, and its effect is reflected in flow metrics, change failure rates, and developer experience feedback.

We are also beginning to adopt DX AI Framework metrics, which are structured around Utilization, Impact, and Cost, aligning with the metrics that companies like Dropbox and Atlassian currently measure. But even as we incorporate these, we continue to lean on system-level flow metrics as the foundation. They are what reveal whether AI adoption is truly improving delivery across the value stream, from ideation to production.

Leadership Lessons from McKinsey and DORA

This perspective also echoes Ruba Borno, VP at AWS, in a recent McKinsey interview on leading through AI disruption. She noted that while AI’s pace of innovation is unprecedented, only 20–30% of proofs of concept reach production. The difference comes from data readiness, security guardrails, leadership-driven change management, and partnerships.

And the proof is tangible: Canva, working with AWS Bedrock, moved from the idea of Canva Code to a launched product in just 12 weeks. That’s precisely the kind of idea-to-operation acceleration we need to measure. It shows that when AI is applied systematically, you don’t just make delivery faster; you also make the entire flow from concept to customer measurably shorter.

The 2025 DORA State of AI-Assisted Software Development Report reinforces this reality. Their cluster analysis revealed that only the top performers, approximately 40% of teams, currently experience AI-enhanced throughput without compromising stability. For the rest, AI often amplifies existing dysfunctions, increasing change failure rates or generating additional waste.

Leadership Implications: What the DORA Findings Mean for You

The 2025 DORA report indicates that only the most mature teams currently benefit from AI-assisted coding. For everyone else, AI mostly amplifies existing problems. What does that mean if you’re leading R&D?

1. Don’t skip adoption, but don’t roll it out unthinkingly.

AI is here to stay, but it’s not a silver bullet. Start small with teams that already have strong engineering practices, and use them to build responsible adoption patterns before scaling.

2. Treat AI as an amplifier of your system.

If your flow is healthy, AI accelerates it. If your flow is dysfunctional, AI makes it worse. Think of it like a turbocharger: great when the engine and brakes are tuned, dangerous when they’re not.

3. Use metrics to know if AI is helping or hurting.

  • Flow time, efficiency, and distribution should improve.
  • DORA’s stability metrics (such as change failure rate) should remain steady or decline.
  • Developer sentiment should show growing confidence, not frustration.

4. Fix bottlenecks in parallel.

AI won’t remove waste; it will expose it faster. Eliminate approval delays, reduce tech debt, and streamline release processes so AI acceleration actually creates value.

5. Keep the message grounded.

The lesson isn’t “don’t adopt AI.” It’s: adopt responsibly, measure outcomes, and strengthen your system so that AI becomes an accelerant, not a liability.

Ruba’s message, reinforced by both McKinsey and DORA, leads to the same conclusion: AI adoption succeeds when it’s measured at the system level, tied to business outcomes, and championed by leadership. Without that visibility, organizations risk accelerating pilots that never translate into value.

Conclusion: Beyond Delivery

The conversation about AI in software delivery is maturing. It’s no longer just about adoption, but about managing costs and system impact. AI must be measured not only by its utilization but also by how it improves flow efficiency, compresses the idea-to-value cycle, and reduces systemic waste.

The organizations that will win in this new era are those that:

  • Embed AI across the entire value stream, not just in delivery.
  • Measure ROI through flow metrics that connect improvements to business outcomes.
  • Manage AI’s cost as carefully as they once managed cloud costs.
  • Lead with visibility, change management, and partnerships to scale adoption.

And critically, successful AI integration requires more than deploying tools. It requires thoughtful measurement, training, and best practices for implementation in software engineering to sustain quality while ensuring that training and strategy are applied consistently across all roles, from product and design to operations and support. Only then can organizations ensure that the promise of acceleration improves outcomes without undermining the collaboration and sustainability that long-term software success depends on.

In short: AI in delivery is helpful, but AI across the value stream is transformational.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  • Atlassian. (2025). How leading companies unlock AI ROI: The AI Collaboration Index. Atlassian Teamwork Lab. Retrieved from https://atlassianblog.wpengine.com/wp-content/uploads/2025/09/atlassian-ai-collaboration-report-2025.pdf
  • Borno, R., & Yee, L. (2025, September). How to lead through the AI disruption. McKinsey & Company, At the Edge Podcast (transcript). Retrieved from https://www.mckinsey.com
  • Cutler, J. (2025, September 23). Product Thinking: Freeing Teams from Operational Overload [Podcast]. Episode 247. Apple Podcasts. https://podcasts.apple.com/us/podcast/product-thinking/id1550800132?i=1000728179156
  • DX, Engineering Enablement Podcast. (2025). Episode excerpt on AI’s role in developer productivity and platform teams. DX. (Quoted in article from Laura Tacho). Episode 90, https://podcasts.apple.com/us/podcast/the-evolving-role-of-devprod-teams-in-the-ai-era/id1619140476?i=1000728563938
  • DX (Developer Experience). (2025). Measuring AI code assistants and agents: The DX AI Measurement Framework™. DX Research, co-authored by Abi Noda and Laura Tacho. Retrieved from https://getdx.com (Image: DX AI Measurement Framework).
  • Kersten, M. (2025). Output to Outcome: An Operating Model for the Age of AI (forthcoming). Presentation at the 2025 Engineering Leadership Tech Summit.
  • Google Cloud & DORA (DevOps Research and Assessment). (2025). 2025 State of AI-Assisted Software Development Report. Retrieved from https://cloud.google.com/devops/state-of-devops

Further Reading

For readers interested in exploring AI ideas further, here are a few related pieces from my earlier writing:

  • AI in Software Delivery: Targeting the System, Not Just the Code
  • AI Is Improving Software Engineering. But It’s Only One Piece of the System
  • Leading Through the AI Hype in R&D
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Software Engineering, Value Stream Management

Copyright © 2026 · RYU Advisory & Media, LLC. All rights reserved.
