

What Happens When We Eliminate the Agile Leader?

October 9, 2025 by philc

The hidden cost of removing the role that protects flow, team health, and continuous improvement

7 min read

Every few months, the “Agile is Dead” conversation surfaces in leadership meetings, LinkedIn threads, or hallway debates. Recently, I’ve been reflecting on it from two angles:

  • First, I’ve seen organizations under new leadership take very different paths; some thrive with dedicated Scrum Masters or Agile Delivery Manager roles, while others remove them and shift responsibilities to engineering managers and teams.
  • Second, I came across a LinkedIn post describing companies letting go of Scrum Masters and Agile coaches, not for financial reasons, but as a conscious redesign of how they deliver software.

Both perspectives point to a deeper confusion. Many believe Agile itself is outdated; others assume that if Scrum changes, the role associated with it, the Scrum Master, should disappear too.

But are teams really outgrowing Agile?

Or are we simply misunderstanding the purpose of the Agile leader?

Agile Isn’t Dead, But It’s Often Misapplied

When people say “Agile is dead,” they’re rarely attacking its principles. Delivering in small batches, learning fast, and adapting based on feedback are still how modern teams succeed. What’s fading is the packaged version of Agile, the one sold through mass certifications, rigid frameworks, and transformation playbooks.

Much of the backlash comes from poor implementations. Consulting firms rolled out what they called “textbook Scrum,” blending practices from other frameworks, such as story points and user stories from Extreme Programming (XP), and applying them everywhere. Teams focused on sprints, standups, and rituals instead of learning and improvement.

Scrum was never meant to be rigid; it’s a lightweight framework for managing complexity. When treated as a checklist, it becomes “cargo-cult” Agile, copying rituals without purpose. When that fails, organizations often blame the framework, rather than the implementation.

That misunderstanding extends to the Scrum Master role itself. Many assume that dropping Scrum means dropping the Scrum Master. But the need for someone to coach, facilitate, and sustain continuous improvement doesn’t vanish when frameworks evolve.

Do We Still Need an Agile Leader?

Whether they stay with Scrum or transition to Kanban or hybrid flow models, many organizations are eliminating Agile leadership roles. Responsibilities once owned by a Scrum Master or Agile Coach are now:

  • absorbed by Engineering Managers,
  • distributed across team members, or
  • elevated to Program Management.

On paper, this looks efficient. In reality, it often creates a gap because no one is explicitly accountable for maintaining flow, team health, and continuous improvement.

The Role’s Evolution and Its Reputation

Over time, the Scrum Master evolved into roles such as Agile Coach, Agile Leader, or Agile Delivery Manager (ADM), leaders who:

  • coached flow and sustainability,
  • resolved cross-team dependencies,
  • championed experimentation and team health,
  • used flow metrics to surface bottlenecks and delivery performance, and
  • connected delivery initiatives or epics to business outcomes.

These were not meeting schedulers. They were system stewards, enabling teams to deliver effectively and sustainably.

Unfortunately, the role’s reputation suffered as the industry scaled too fast. The explosion of two-day certification courses created an influx of “certified experts” with little experience. Many were placed in impossible positions, expected to transform organizations without the authority or mentorship to succeed. Some individuals grew into exceptional Agile leaders, while others struggled.

The uneven quality left leaders skeptical. That’s not a failure of the role itself, but a byproduct of how quickly Agile became commercialized.

When the Role Disappears (or Gets Folded Into Management)

In some organizations, the Agile leadership role has been absorbed by Engineering Managers. On paper, this simplifies accountability and structure. In practice, it creates new trade-offs:

  • Overload: Engineering Managers juggle hiring, technical design and strategy, people development, and implementation oversight. Adding Agile facilitation stretches them thin.
  • Loss of neutrality: It’s hard to be both coach and evaluator. Psychological safety and open reflection suffer.
  • Reduced focus: Good Agile leaders specialize in flow, metrics, and process improvement. Those responsibilities often fade when combined with other priorities.

I’m watching this shift happen in real time. In one organization that removed its Agile leaders, Engineering Managers now coordinate ceremonies and metrics while trying to sustain alignment. The administrative tasks are covered, but continuous improvement and team sentiment have slipped out of focus. There’s only so much one role can absorb before something important gives way.

These managers, once deeply technical and people-oriented, now find themselves stretched across too many competing responsibilities. It’s still early, but the question isn’t whether meetings happen; it’s whether performance, flow, and engagement can be sustained without a separate role dedicated to nurturing them.

Redistribution to Program Management

Some of the higher-level coaching and metrics work has moved into Program Management. Many program managers at this organization hold Scrum Master certifications and act as advisors to Engineering Managers, while maintaining flow metrics and ensuring value stream visibility.

It’s a reasonable bridge, but scale limits its impact. A single program manager may support six to eight teams, focusing only on the most critical issues. The broader discipline of continuous improvement, including reviewing flow data, addressing bottlenecks, or mapping value streams, risks fading when no one on the team is closely involved.

Distributing or Rotating Responsibilities

Some teams attempt to share Agile responsibilities: rotating facilitators, distributing meeting ownership, or collectively tracking metrics. It’s a well-intentioned model that works for mature, stable teams, but it has limits.

  • Frequent rotation breaks continuity and learning.
  • Coaching depth is lost when no one develops mastery.
  • Under delivery pressure, improvement tasks fall to the bottom of the list.

Distributed ownership can work in bursts, but it rarely sustains long-term improvement. Someone still needs to own the system, even if the title is gone.

Leadership Mindsets Define Success

Whether an organization retains or removes Agile leaders often comes down to mindset.

Execution-First Leadership (Command & Control):

  • Believes delivery can be managed through structure and accountability.
  • Sees facilitation and coaching as overhead.
  • Accepts distributed ownership as “good enough.”

Systems-Enabling Leadership (Servant / Flow):

  • Believes facilitation and improvement require focus and skill.
  • Invests in Agile leaders to strengthen flow and collaboration.
  • Sees distributed responsibility as a step, not a destination.

Neither model is inherently wrong; they reflect different views on how improvement happens. But experience shows a clear trade-off: when continuous improvement is one of many responsibilities, it often becomes no one’s priority. A dedicated Agile leader keeps that focus alive; an overloaded manager rarely can for long. The key is designing a system where improvement has space to breathe, not just another task on an already full plate.

The Myth of the Unicorn

When organizations fold Agile leadership into engineering management or product management, they often create “unicorns”: individuals expected to possess deep core skills while simultaneously serving as effective leaders, delivery owners, and process coaches.

Those who can do this well are rare, and even they struggle with constant task-switching across competing priorities. When these high performers leave, the organization loses more than a person; it loses context, flow awareness, and continuity. Replacing them is difficult; few candidates in the market combine such a broad mix of technical, leadership, and coaching skills.

Scrum, Kanban, and What Doesn’t Change

Practices evolve. Scrum remains widely used, but many teams operate in Kanban or hybrid systems. The shift to continuous delivery doesn’t eliminate the need for Agile leadership; if anything, it heightens it.

As work becomes more distributed and complex, teams still need a steward of flow and feedback. Frameworks differ; however, the function that enables collaboration and systemic improvement remains the same.

The Path Forward: Protect the Capability, Not the Title

Instead of asking, “Should we bring Scrum Masters back?” leaders should be asking a more fundamental question:

Who in our organization is responsible for enabling collaboration, removing impediments, promoting improvement, maintaining team health, and driving systemic learning?

If the answer is “no one,” it doesn’t matter what you call the role; you have a gap.

If the answer is “partially someone (rotated or shared),” acknowledge the compromise, the diffusion of ownership and loss of focus it brings, and revisit it as the organization matures.

Agile will continue to exist with or without a dedicated Scrum Master or Agile Leader. Frameworks evolve, but the principles of small batches, fast feedback, and empowered teams remain the same. Having a dedicated role strengthens a team’s ability to apply those principles consistently. Without one, Agile doesn’t vanish, but performance and improvement discipline often do.

The point isn’t about losing Agile practices; it’s about the risk of losing stewardship. Without it, the habits that once drove learning and improvement fade, and teams can inevitably slide back toward the rigid, hierarchical models Agile set out to change.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


Related Reading

If this topic resonated with you, you may find these articles valuable as complementary perspectives:

  • From Scrum Master to Agile Delivery Manager: Evolution in the Age of Flow
    Explores how the Agile leadership role evolved beyond facilitation to become a strategic driver of flow and measurable outcomes.
  • Why Cutting Agile Leadership Hurts Teams More Than It Saves
    Examines the long-term cultural and performance costs organizations face when eliminating roles dedicated to continuous improvement.
  • Mindsets That Shape Software Delivery Team Structures
    Highlights how leadership philosophies, command-and-control versus systems-enabling, determine whether teams thrive or stall.

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

From Two Pizzas to One: How AI Reshapes Dev Teams

October 2, 2025 by philc

Exploring how AI could reshape software teams, smaller pods, stronger guardrails, and the balance between autonomy and oversight.

7 min read

For more than two decades, Jeff Bezos’s “two-pizza team” rule has been shorthand for small, effective software teams: a group should be small enough that two pizzas can feed them, typically about 5–10 people. The principle is simple: fewer people means fewer communication lines, less overhead, and faster progress. The math illustrates this well: 10 people create 45 communication channels, while four people create just six. Smaller groups spend less time coordinating, which often leads to faster outcomes.
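The scaling is just the handshake formula: a team of n people has n(n-1)/2 potential communication lines. A quick calculation makes the numbers above concrete:

```python
def communication_lines(team_size: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return team_size * (team_size - 1) // 2

for n in (4, 5, 8, 10):
    print(f"{n} people -> {communication_lines(n)} channels")
# 4 people -> 6 channels ... 10 people -> 45 channels
```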

This article was sparked by a comment at this year’s Enterprise Technology Leadership Summit. A presenter suggested that AI could soon reshape how we think about team size. That got me wondering: what would “one-pizza teams” actually look like if applied to enterprise-grade systems where resilience, compliance, and scalability are non-negotiable?

The Hype: “Do We Even Need Developers?”

In recent months, I’ve heard product leaders speculate that AI might make developers optional. One senior product manager even suggested, half-seriously, that “we may not need developers at all, since AI can write code directly.” On the surface, that sounds bold. But in reality, it reflects limited hands-on experience with the current tools. Generating a demo or prototype with AI is one thing; releasing code into a production system, supporting high-volume, transactional workloads with rollback, observability, and compliance requirements, is another. It’s easy to imagine that AI can replace developers entirely until you’ve lived through the complexity of maintaining enterprise-grade systems.

I’ve also sat in conversations with CTOs and VPs excited about the economics. AI tools, after all, look cheap compared to fully burdened human salaries. On a spreadsheet, reducing teams of 8–12 engineers down to one or two may appear to unlock massive savings. But here again, prototypes aren’t production, and what looks good in theory may not play out in practice.

The Reality Check

The real question isn’t whether AI eliminates developers, it’s how it changes the balance between humans, tools, and team structure. While cost pressures may tempt leaders to shrink teams, the more compelling opportunity may be to accelerate growth and innovation. AI could enable organizations to field more small teams in parallel, modernize multiple subdomains simultaneously, deliver features faster, and pivot quickly to outpace their competitors.

Rather than a story of headcount reduction, one-pizza teams could become a story of capacity expansion, with more teams and a broader scope, all while maintaining the same or slightly fewer people. But this is still, to some extent, a crystal ball exercise. None of us can predict with certainty what teams will look like in three, five, or ten years. What seems possible today is that AI enables smaller pods to take on more responsibility, provided we approach this shift with caution and discipline.

Why AI Might Enable Smaller Teams

AI’s value in this context comes from how it alters the scope of work for each developer.

Hygiene at scale. Practices that teams often defer, such as tests, documentation, release notes, and refactors, can be automated or continuously maintained by AI. Quality could become less negotiable and more baked into the process.

Coordination by contract. AI works best when given context. PR templates, paved roads, and CI/CD guardrails provide part of that. But so do rule files: lightweight markdown contracts such as cursor_rules.md or claude.md that encode expectations for test coverage, security practices, naming conventions, and architecture. These files give AI the boundaries it needs to generate code that aligns with team standards. Over time, this could transform AI from a generic assistant into a domain-aware teammate.
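As a rough sketch, and to be clear, the specific rules below are hypothetical examples rather than a standard, such a claude.md contract might look like this:

```markdown
# claude.md: team conventions for AI-assisted changes (illustrative)

- Every new module ships with unit tests; aim for 80%+ branch coverage.
- Never log secrets or PII; use the team's structured logger.
- Respect existing domain boundaries; no imports across domain packages.
- Public functions require type hints and docstrings.
- Database access goes through the repository layer, never inline SQL.
```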

Broader scope. With boilerplate and retrieval handled by AI, a small pod might own more of the vertical stack, from design to deployment, without fragmenting responsibilities across multiple groups.

Reduced overhead. Acting as a shared memory and on-demand research partner, AI can minimize the need for lengthy meetings or additional specialists. Coordination doesn’t disappear, but some of the lower-value overhead could shrink.

From Efficiency to Autonomy

The promise isn’t simply in productivity gains per person; it may lie in autonomy. AI could provide small pods with enough context and tooling to operate independently. This autonomy might enable organizations to spin up more one-pizza teams, each capable of covering a subdomain, reducing technical debt, delivering features, or running experiments. Instead of doing the same work with fewer people, companies might do more work in parallel with the same resources.

How Roles Could Evolve

If smaller teams become the norm, roles may shift rather than disappear.

  • Product Managers could prototype with AI before engineers write code, run quick user tests, and even handle minor fixes.
  • Designers might use AI to generate layouts while focusing more on UX research, customer insights, and accessibility.
  • Engineers may be pushed up the value chain, from writing boilerplate to acting as architects, integrators, and AI orchestrators. This creates a potential career pipeline challenge: if AI handles repetitive tasks, how will junior engineers gain the depth needed to become tomorrow’s architects?
  • QA specialists can transition from manual testing to test strategy, utilizing AI to accelerate execution while directing human effort toward edge cases.
  • New AI-native roles, such as prompt engineers, context engineers, AI QA, or solutions architects, may emerge to make AI trustworthy and enterprise-aligned.

In some cases, the traditional boundaries between product, design, and engineering could blur further into “ProdDev” pods, teams where everyone contributes to both the vision and the execution.

The Enterprise Reality

Startups and greenfield projects may thrive with tiny pods or even solo founders leveraging AI. But in enterprise environments, complexity doesn’t vanish. Legacy systems, compliance, uptime, and production support continue to require human oversight.

One-pizza pods might be possible in select domains, but scaling them down won’t be simple. Where it does happen, success may depend on making two human hats explicit:

  • Tech Lead – guiding design reviews, threat modeling, performance budgets, and validating AI output.
  • Domain Architect – enforcing domain boundaries, compliance, and alignment with golden paths.

Even then, these roles rely on shared scaffolding:

  • Production Engineering / SRE – managing incidents, SLOs, rollbacks, and noise reduction.
  • Platform Teams – providing paved roads like IaC modules, service templates, observability baselines, and policy-as-code.

The point isn’t that enterprises can instantly shrink to one-pizza teams, but that AI might create the conditions to experiment in specific contexts. Human judgment, architecture, and institutional scaffolding remain essential.

Guardrails and Automation in Practice

For smaller pods to succeed, standards need to be non-negotiable. AI may help enforce them, but humans must guide the judgment.

Dual-gate reviews. AI can run mechanical checks, while humans approve architecture and domain impacts.

Evidence over opinion. PRs should include artifacts, tests, docs, and performance metrics, so reviews are about validating evidence, not debating opinions.

Security by default. Automated scans block unsafe merges.

Rollback first. Automation should default to rollback, with humans approving any decision to fix forward.

Toil quotas. Reducing repetitive ops work quarter by quarter keeps small teams sustainable.
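To make the dual-gate idea concrete, here is a minimal sketch in Python; the fields and thresholds are hypothetical, not any specific tool’s API:

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    tests_pass: bool           # CI test suite result
    coverage: float            # branch coverage, 0.0 to 1.0
    security_findings: int     # blocking findings from automated scans
    touches_architecture: bool # changes to domain boundaries, schemas, APIs
    human_approved: bool       # tech lead / domain architect sign-off

def mechanical_gate(pr: PullRequest) -> bool:
    """Gate 1: checks that AI and CI can run without human judgment."""
    return pr.tests_pass and pr.coverage >= 0.80 and pr.security_findings == 0

def merge_allowed(pr: PullRequest) -> bool:
    """Gate 2: humans still approve architecture and domain impacts."""
    if not mechanical_gate(pr):
        return False  # security by default: unsafe merges are blocked
    if pr.touches_architecture:
        return pr.human_approved
    return True
```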

Beyond CI, AI can also shape continuous delivery by optimizing pipelines, enforcing deployment policies, validating changes against staging telemetry, and even self-healing during failures.

What’s Real vs. Wishful Thinking (2025)

AI is helping, but unevenly. Gains emerge when organizations re-architect workflows end-to-end, rather than layering AI on top of existing processes.

Quality and security remain human-critical. Studies suggest a high percentage of AI-generated code carries vulnerabilities. AI may accelerate output, but without human checks, it risks accelerating flaws.

AI can make reviews more efficient by summarizing diffs and flagging issues, but final approval still requires human judgment on architecture and risk.

And production expectations haven’t changed. A 99.99% uptime commitment still allows only about 13 minutes of downtime per quarter. Even if AI can help remediate, humans remain accountable for those calls.
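The error-budget arithmetic is easy to check directly:

```python
minutes_per_quarter = (365.25 / 4) * 24 * 60          # about 131,490 minutes
downtime_budget = minutes_per_quarter * (1 - 0.9999)  # 99.99% availability
print(f"{downtime_budget:.1f} minutes of downtime per quarter")  # ~13.1
```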

Practitioner feedback is also worth noting. In conversations with developers and business users of AI, most of whom are still in their first year of adoption, the consensus is that productivity gains are often inflated. Some tasks are faster with AI, while others require more time to manage context. Most people view AI as a paired teammate, rather than a fully autonomous agent that can build almost everything in one or two shots.

Challenges to Consider

Workforce disruption. If AI handles more routine work, some organizations may feel pressure to reduce the scope of specific roles. Whether that turns into cuts or an opportunity to reskill may depend on leadership choices.

Mentorship and pipeline. Junior engineers once learned by doing the work AI now accelerates. Without intentional design of new learning paths, we may risk a gap in the next generation of senior engineers.

Over-reliance. AI is powerful but not infallible. It can hallucinate, generate insecure code, or miss subtle regressions. Shrinking teams too far might leave too few human eyes on critical paths.

A Practical Checklist

  • Product risk: 99.95%+ SLOs or regulated data? Don’t shrink yet.
  • Pager noise: <10 actionable alerts/week and rollback proven? Consider shrinking.
  • Bus factor: ≥3 engineers can ship/release independently? Consider shrinking.
  • AI maturity: Are AI checks and PR evidence mandatory? Consider shrinking.
  • Toil trend: Is toil tracked and trending down? Consider shrinking.
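Treated as data, the checklist above collapses into a small decision function; a sketch using the thresholds as written there:

```python
def ready_to_shrink(slo: float, regulated_data: bool, weekly_alerts: int,
                    rollback_proven: bool, independent_shippers: int,
                    ai_gates_mandatory: bool, toil_trending_down: bool) -> bool:
    """Rough go/no-go for shrinking a team, per the checklist's thresholds."""
    if slo >= 0.9995 or regulated_data:
        return False  # product risk: don't shrink yet
    return (weekly_alerts < 10 and rollback_proven
            and independent_shippers >= 3
            and ai_gates_mandatory and toil_trending_down)

# Example: a low-risk internal service with solid automation in place.
print(ready_to_shrink(slo=0.999, regulated_data=False, weekly_alerts=6,
                      rollback_proven=True, independent_shippers=3,
                      ai_gates_mandatory=True, toil_trending_down=True))  # True
```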

Bottom Line

AI may make one-pizza teams possible, but only if automation carries the repetitive workload, humans maintain oversight and judgment, and guardrails enforce standards. Done thoughtfully, smaller pods don’t mean scarcity; they can mean focus.

And when organizations multiply these pods across a portfolio, the outcome might not just be sustaining velocity but accelerating it: more features, faster modernization, shorter feedback loops, and quicker pivots against disruption.

This is the story of AI in team structure, not doing the same with less, but doing more with the same.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

AI in Software Delivery: Targeting the System, Not Just the Code

August 9, 2025 by philc

7 min read

This article is a follow-up to my earlier post, AI Is Improving Software Engineering. But It’s Only One Piece of the System. In that post, I explored how AI is already helping engineering teams work faster and better, but also why those gains can be diminished if the rest of the delivery system lags.

Here, I take a deeper look at that system-wide perspective. Adopting AI is about strengthening the entire system. We need to think about AI not only within specific teams but at the organizational level, ensuring its impact is felt throughout the value stream.

AI has the potential to improve how work flows through every part of our delivery system: product, QA, architecture, platform, and even business functions like sales, marketing, legal, and finance.

If you already have robust delivery metrics, you can pinpoint exactly where AI will have the most impact, focusing its efforts on the actual constraints rather than “speeding up” work at random. But for leaders who don’t yet have a clear set of system metrics and are still under pressure to show AI’s return on investment, I strongly recommend starting with a platform or framework that captures system delivery performance.

In my previous articles, I’ve outlined the benefits of SEI (Software Engineering Intelligence) tools, DORA metrics (debatable), and, ideally, Value Stream Management (VSM) platforms. These solutions measure and visualize delivery performance across the system, tracking indicators like cycle time, throughput, quality, and stability. They help you understand your current performance and also enable you to attribute improvements, whether from AI adoption or other changes, to specific areas of your workflow. Selecting the right solution depends on your organizational context, team maturity, and goals, but the key is having a measurement foundation before you try to quantify AI’s impact.

The Current Backlash and Why We Shouldn’t Overreact

Recent research and commentary have sparked a wave of caution around AI in software engineering.

A controlled trial by METR (2025) found that experienced developers using AI tools on their repositories took 19% longer to complete tasks than without AI, despite believing they were 20% faster. The 2024 DORA report found similar patterns: a 25% increase in AI adoption correlated with a 1.5% drop in delivery throughput and a 7.2% decrease in delivery stability. Developers felt more productive, but the system-level metrics told another story.

Articles like AI Promised Efficiency. Instead, It’s Making Us Work Harder (Afterburnout, n.d.) point to increased cognitive load, context switching, and the need for constant oversight of AI-generated work. These findings have fed a narrative that AI “isn’t working” or is causing burnout.

But from my perspective, this moment is less about AI failing and more about a familiar pattern: new technology initially disrupts before it levels up those who learn to use it well. The early data reflects an adoption phase, not the end state.

Our Teams’ Approach

Our organization is embracing an AI-first culture, driven by senior technology leadership and by senior engineers who are leading the charge: innovating, experimenting, and mastering the latest tools and LLMs. However, many teams are earlier in their adoption journey and can feel intimidated by these pioneers. In our division, my focus is on encouraging, training, and supporting engineers to adopt AI tools, gain hands-on experience, explore use cases, and identify gaps. The goal isn’t immediate mastery but building the skills and confidence to use these tools effectively over time.

Only after sustained, intentional use, months down the line, will we have an informed, experienced team that can provide meaningful feedback on the actual outcomes of adoption. That’s when we’ll honestly know where AI is moving the needle, and where it isn’t.

How I Respond When Asked “Is AI Working?”

This approach is inspired by Laura Tacho, CTO at DX, and her recent presentation at LeadDev London, How to Cut Through the Hype and Measure AI’s Real Impact (Tacho, 2025). As a leader, when I face the “how effective is AI?” debate, I ground my answer in three points:

1. How are we performing

We measure our system performance with the same Flow Metrics we used before AI: quality, stability, time-to-value, and other delivery health indicators. We document any AI-related changes to the system, tools, or workflows so we can tie changes in metrics back to their potential causes.

2. How AI is helping (or not helping)

We track where AI is making measurable improvements, where it’s neutral, and where it may be introducing new friction. This is about gaining an honest understanding of where AI is adding value and where it needs refinement.

3. What will we do next

Based on that data and team feedback, we adjust. We expand AI use where it’s working, redesign where it’s struggling, and stay disciplined about aligning AI experiments to actual system constraints.

This framework keeps the conversation grounded in facts, not hype, and shows that our AI adoption strategy is deliberate, measurable, and responsive.

What System Are We Optimizing?

When I refer to “the system,” I mean the structure and process by which ideas flow through our organization, become working software, and deliver measurable value to customers and the business.

Using a Value Stream Management and Product Operating Model approach together gives us that view:

  • Value stream: the whole journey of work from ideation to delivery to customer realization, including requirements, design, build, test, deploy, operate, and measure.
  • Product operating model: persistent, cross-functional teams aligned to products that own outcomes across the lifecycle.

Together, these models reveal not just who is doing the work, but how it flows and where the friction is. That’s where AI belongs, improving flow, clarity, quality, alignment, and feedback across the system.

The Mistake Many Are Making

Too many organizations inject AI into the wrong parts of the system, often where the constraint isn’t. Steve Pereira’s It’s time for AI to meet Flow (Pereira, 2025) captures it well: more AI output can mean more AI-supported rework if you’re upstream or downstream of the actual bottleneck.

This is why I believe AI must be tied to flow improvement:

  1. Make the work visible – Map how work moves, using both our existing metrics and AI to visualize queues, wait states, and handoffs.
  2. Identify what’s slowing it down – Use flow metrics like cycle time, WIP, and throughput to find constraints before applying AI (see the sketch after this list).
  3. Align stakeholders – AI can synthesize input from OKRs, roadmaps, and feedback, so we’re solving the right problems.
  4. Prototype solutions quickly – Targeted, small-scale AI experiments validate whether a constraint can be relieved before scaling.
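The metrics in step 2 need nothing more exotic than start and finish timestamps on work items. A minimal sketch, with hypothetical data:

```python
from datetime import date
from statistics import mean

# Hypothetical work items: (started, finished); None means still in progress.
items = [
    (date(2025, 7, 1), date(2025, 7, 8)),
    (date(2025, 7, 2), date(2025, 7, 16)),
    (date(2025, 7, 5), None),
    (date(2025, 7, 9), date(2025, 7, 12)),
]

done = [(s, f) for s, f in items if f is not None]
cycle_times = [(f - s).days for s, f in done]

print(f"Avg cycle time: {mean(cycle_times):.1f} days")                # 8.0
print(f"Throughput:     {len(done)} items finished")                  # 3
print(f"WIP:            {sum(1 for _, f in items if f is None)}")     # 1
```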

Role-by-Role AI Adoption Across the Value Stream

AI isn’t just for software engineers; it benefits every role on your cross-functional team. Here are just a few examples of how it can make an impact; each role has many more possibilities than those listed below.

Product Managers / Owners

  • Generate product requirements documentation.
  • Analyze customer, market, and outcome metrics.
  • Groom backlogs; draft user stories and acceptance criteria.
  • Summarize customer feedback and support tickets.
  • Use AI to prepare for refinement and planning.

QA Engineers

  • Generate test cases from acceptance criteria or code diffs.
  • Detect coverage gaps and patterns in flaky tests.
  • Summarize PR changes to focus testing.

Domain Architects

  • Visualize system interactions and generate diagrams.
  • Validate design patterns and translate business rules into architecture.

Platform Teams

  • Generate CI/CD configurations.
  • Enforce architecture and security standards with automation.
  • Identify automation opportunities from delivery metrics.

InfoSec Liaisons

  • Scan commits and pull requests (PRs) for risky changes.
  • Draft compliance evidence from logs and release data.

Don’t Forget the Extended Team

Sales, marketing, legal, and finance all influence the delivery flow. AI can help here, too:

  • Sales: Analyze and generate leads, summarize customer engagements, and highlight trends for PMs.
  • Marketing: Draft launch content from release notes.
  • Legal: Flag risky language, summarize new regulations.
  • Finance: Model ROI of roadmap options, forecast budget impact.

Risk and Resilience

What happens when AI hits limits or becomes unavailable? Inference isn’t free; costs will rise, subsidies will fade, and usage may be capped. Do you have fallback workflows, maintain manual expertise, and measure AI’s ROI beyond activity? Another reason for us to gain experience with these tools is to improve our efficiency and understand usage patterns.

The Opportunity

We already have the data to see how our system performs. The real opportunity is to aim AI at the constraints those metrics reveal, removing friction, aligning teams, and improving decision-making. If we take the time to learn the tools now, we’ll be ready to use them where they matter most.

What Now?

We already have the metrics to see how our system performs. The real opportunity is to apply AI purposefully across the full lifecycle, from ideation and design, through development, testing, deployment, and into operations and business alignment. By directing AI toward the right constraints, we eliminate friction, unify our teams around clear metrics, and elevate decision-making at every step.

Yes, AI adoption is a learning journey. We’ll stumble, experiment, and iterate, but with intention, measurement, and collaboration, we can turn scattered experiments into a sustained competitive advantage. AI adoption is about transforming or improving the system itself.

AI isn’t failing; it’s maturing. We’re still climbing the adoption curve. Our challenge and opportunity is to build the muscle and culture to deploy AI across the lifecycle, turning today’s experiments into tomorrow’s engineered advantage.

For anyone still hesitant, know this: AI isn’t going away. Whether it slows us down or speeds us up, we must learn to use it well, or we risk being left behind. Let’s learn. Let’s measure. Let’s apply AI where it’s most relevant and learn to understand its current benefits and limitations. There’s no going back, only forward.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

Afterburnout. (n.d.). AI promised efficiency. Instead, it’s making us work harder. Afterburnout. https://afterburnout.co/p/ai-promised-to-make-us-more-efficient

Clark, P. (2025, July). AI is improving software engineering. But it’s only one piece of the system. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/07/ai-is-improving-software-engineering-but-its-only-one-piece-of-the-system/

METR. (2025, July 10). Measuring the impact of early-2025 AI on experienced open-source developer productivity. METR. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Pereira, S. (2025, August 8). It’s time for AI to meet flow: Flow engineering for AI. Steve Pereira. https://stevep.ca/its-time-for-ai-to-meet-flow/

State of DevOps Research Program. (2024). 2024 DORA report. Google Cloud / DORA.

Tacho, L. (2025, June). How to cut through the hype and measure AI’s real impact. Presentation at LeadDev London.  https://youtu.be/qZv0YOoRLmg?si=aMes-VWyct_DEWz0

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

AI Is Improving Software Engineering. But It’s Only One Piece of the System

July 31, 2025 by philc

5 min read

A follow-up to my last post, Leading Through the AI Hype in R&D, this piece explores how strong AI adoption still needs systems thinking, responsibility, and better leadership focus.

Leaders are moving fast to adopt AI in engineering. The urgency is real, and the pressure is growing. But many are chasing the wrong kind of improvement, or rather, focusing too narrowly.

AI is transforming software engineering, but it addresses only one part of a much larger system. Speeding up code creation doesn’t solve deeper issues like unclear requirements, poor architecture, or slow feedback loops, and in some cases, it can amplify dysfunction when the system itself is flawed.

Engineers remain fully responsible for what they ship, regardless of how the code is written. The real opportunity is to increase team capacity and deliver value faster, not to reduce cost or inflate output metrics.

The bigger risk lies in how senior leaders respond to the hype. When buzzwords instead of measurable outcomes drive expectations, focus shifts to the wrong problems. AI is a powerful tool, but progress requires leadership that stays grounded, focuses on system-wide improvement, and prioritizes accountability over appearances.

A team member recently shared Writing Code Was Never the Bottleneck by Ordep. It cut through the noise. Speeding up code writing doesn’t solve the deeper issues in software delivery. That article echoed what I’ve written and experienced myself. AI helps, but currently not where many think it does.

This post builds on my earlier post, Leading Through the AI Hype in R&D. That post challenged hype-driven expectations. This one continues the conversation by focusing on responsibility, measurement, and real system outcomes.

Code Implementation Is Rarely the Bottleneck

Tools like Copilot, Claude Code, Cursor, Devin, and others can help developers write code faster. But that’s not where most time is lost.

Delays come from vague requirements, missing context, architecture problems, slow reviews, and late feedback. Speeding up code generation in that environment doesn’t accelerate delivery. It accelerates dysfunction.

I Use AI in My Work

I’ve used agentic AI and tools to implement code, write services, and improve documentation. It’s productive. But it takes consistent reviews. I’ve paused, edited, and rewritten plenty of AI-generated output.

That’s why I support adoption. I created a tutorial to help engineers in my division learn to use AI effectively. It saves time. It adds value. But it’s not automatic. You still need structure, process, and alignment.

Engineers Must Own Impact, Not Just Output

Using AI doesn’t remove responsibility. Engineers are still accountable for what their code does once it runs.

They must monitor quality, performance, cost, and user impact. AI can generate a function. But if that function causes a spike in memory usage or breaks under scale, someone has to own that.

I covered this in Responsible Engineering: Beyond the Code – Owning the Impact. AI makes output faster. That makes responsibility more critical, not less. Code volume isn’t the goal. Ownership is.

Code Is One Step in a Larger System

Software delivery spans more than development. It includes discovery, planning, testing, release, and support. AI helps one step. But problems often live elsewhere.

If your system is broken before and after the code is written, AI won’t help. You need to fix flow, clarify ownership, and reduce friction across the whole value stream.

Small Teams Increase Risk Without System Support

Some leaders believe AI allows smaller teams to do more. That’s only true if the system around them improves too.

Smaller teams carry more scope. Cognitive load increases. Knowledge becomes harder to spread. Burnout rises.

Support pressure also grows. The same few experts get pulled into production issues. AI doesn’t take the call. It doesn’t debug or triage. That load falls on people already stretched thin.

When someone leaves, the risk is bigger. The team becomes fragile. Response times are slow. Delivery slips.

The Hard Part Is Not Writing the Code

One of my engineers said it well. Writing code is the easy part. The hard part is designing systems, maintaining quality, onboarding new people, and supporting the product in production.

AI helps with speed. It doesn’t build understanding.

AI Is a Tool. Not a Strategy

I support using AI. I’ve adopted it in my work and encourage others to do the same. But AI is a tool. It’s not a replacement for thinking.

Use it to reduce toil. Use it to improve iteration speed. But don’t treat it as a strategy. Don’t expect it to replace engineering judgment or improve systems on its own.

Some leaders see AI as a path to reduce headcount. That’s short-sighted. AI can increase team capacity. It can help deliver more features, faster. That can drive growth, expand market share, and increase revenue. The opportunity is to create more value, not simply lower cost.

The Metrics You Show Matter

Senior leaders face pressure to show results. Investors want proof that AI investments deliver value. That’s fair.

The mistake is reaching for the wrong metrics. Commit volume, pull requests, and code completions are easy to inflate with AI. They don’t reflect real outcomes.

This is where hype causes harm. Leaders start chasing numbers that match the story instead of measuring what matters. That weakens trust and obscures the impact.

If AI is helping, you’ll see better flow. Fewer delays. Faster recovery. More predictable outcomes. If you’re not measuring those things, you’re missing the point.

AI Is No Longer Optional

AI adoption in software development is no longer a differentiator. It’s the new baseline.

Teams that resist it will fall behind. No investor would approve a team using hammers when nail guns are available. The expectation is clear. Adopt modern tools. Deliver better outcomes. Own the results.

What to Focus On

If you lead AI adoption, focus on the system, not the noise.

  • Improve how work moves across teams
  • Reduce delays between steps
  • Align teams on purpose and context
  • Use AI to support engineers, not replace them
  • Measure success with delivery metrics, not volume metrics
  • Expect engineers to own what they ship, with or without AI

You don’t need more code. You need better outcomes. AI can help, but only if the system is healthy and the people are accountable.

The hype will keep evolving. So will the tools. But your responsibility is clear. Focus on what’s real, what’s working, and what delivers value today.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Clark, Phil. Leading Through the AI Hype in R&D. Rethink Your Understanding. July 2025. Available at: https://rethinkyourunderstanding.com/2025/07/leading-through-the-ai-hype-in-rd
  2. Ordep. Writing Code Was Never the Bottleneck. Available at: https://ordep.dev/posts/writing-code-was-never-the-bottleneck
  3. Clark, Phil. Responsible Engineering: Beyond the Code – Owning the Impact. Rethink Your Understanding. March 2025. Available at: https://rethinkyourunderstanding.com/2025/03/responsible-engineering-beyond-the-code-owning-the-impact

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Metrics, Product Delivery, Software Engineering

Leading Through the AI Hype in R&D

July 27, 2025 by philc

7 min read

Note: AI is evolving rapidly, transforming workflows faster than expected. Most of us can’t predict how quickly or to what degree AI will change our teams or workflows. My focus for this post is on the current state, the pace of change, and the reality versus the hype at the enterprise level. I promote the adoption of AI and encourage every team member to embrace it.

I’ve spent the past few weeks deeply immersed in “vibe coding” and experimenting with agentic AI tools during my nights and weekends, learning how specialized agents can orchestrate like real product teams when given proper context and structure. But in my day job as a senior technology leader, the tone shifts. I’ve found myself in increasingly chaotic meetings with senior leaders, chief technology officers, chief product officers, and engineering VPs, all trying to out-expert each other on the transformative power of AI on product and development (R&D) teams.

The energy often feels like a pitch room, not a boardroom. Someone declares Agile obsolete. Another suggests we can replace six engineers with AI agents. A few toss around claims of “30× productivity.” I listen, sometimes fascinated, often frustrated, at how quickly the conversation jumps to conclusions without asking the right questions. More troubling, many of these executives are under real pressure from investors and ownership to show ROI. If $1M is spent on AI adoption, how do we justify the return? What metrics will we use to report back?

Hearing the Hype (and Feeling the Exhaustion)

One executive confidently declared, “Agile and Lean are dead,” citing the rise of autonomous AI agents that can plan, code, test, and deploy without human guidance. His opinion echoed a recent blog post, Agile Is Dead: Long Live Agentic Development, which criticized Agile rituals like daily stand-ups and sprints as outdated and encouraged teams to let agents take over the workflow¹. Meanwhile, agile coaches argue that bad Agile, not Agile itself, is the real problem, and that AI can strengthen Agile if applied thoughtfully.

The hype escalates when someone shares stories of high-output engineering from a senior developer keeping up with AI capabilities: 70 AI-assisted commits in a single night, barely touching the keyboard. Another proposes shrinking an 8-person team to just two engineers, one writing prompts and one overseeing quality, while AI agents do the rest. These stories are becoming increasingly common, especially as research suggests that AI can dramatically reduce the number of engineers needed for many projects². Elad Gil even claimed most engineering teams could shrink by 5×–10×.

But these same reports caution against drawing premature conclusions. They warn that while AI enables productivity gains, smaller teams risk creating knowledge silos, reduced quality, and overloading the remaining developers². Other sources echo this risk: Software Engineering Intelligence (SEI) tools have flagged increased fragility and reduced clarity in AI-generated code when review practices and documentation are lacking³.

What If We’re Already Measuring the Right Things?

While executives debate whether Agile is dead, I find myself thinking: we already have the tools to measure AI’s impact, we just need to use them.

In my organization’s division, we’ve spent years developing a software delivery metrics strategy centered on Value Stream Management, Flow Metrics, and team sentiment. These metrics already show how work flows through the system, from idea to implementation to value. They include:

  • Flow metrics like distribution, throughput, time, efficiency, and load
  • Quality indicators like change failure rate and security defect rate
  • Sentiment and engagement data from team surveys
  • Outcome-oriented metrics like anticipated outcomes and goal (OKR) alignment

Recently, I aligned our Flow Metrics with the DX Core 4 Framework⁴ matrix, organizing them into four key categories: speed, effectiveness, quality, and impact. We made these visual and accessible, using a simple chart to show how each metric relates to delivery health. These metrics don’t assume Agile is obsolete or that AI is the solution. They track how effectively our teams are delivering value.
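As an illustration of the matrix, the mapping can be as simple as a lookup table; the assignments below are a hypothetical example, not our actual chart:

```python
# Hypothetical mapping of flow and delivery metrics onto DX Core 4 categories.
dx_core4 = {
    "speed":         ["flow velocity (throughput)", "flow time"],
    "effectiveness": ["flow efficiency", "flow load"],
    "quality":       ["change failure rate", "security defect rate"],
    "impact":        ["anticipated outcomes", "OKR alignment"],
}

for category, metrics in dx_core4.items():
    print(f"{category:>13}: {', '.join(metrics)}")
```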

So when senior leaders asked, “How will we measure AI’s impact?” I reminded them, we already are. If AI helps us move faster, we’ll see it in flow time. If it increases capacity, we’ll see it in throughput (flow velocity). If it maintains or improves quality, our defect rates and sentiment scores will reflect that. The same value stream lens that shows us where work gets stuck will also reveal whether AI helps us unstick it.

Building on Existing Metrics: The AI Measurement Framework

Instead of creating an entirely new system, I layered an AI Measurement Framework on top of our existing performance metrics⁵. The framework includes three categories:

  1. Utilization:
    • % of AI-generated code
    • % of developers using AI tools
    • Frequency of AI-agent use per task
  2. Impact:
    • Changes in flow metrics (faster cycle time)
    • Developer satisfaction or frustration
    • Delivered value per team or engineer
  3. Cost:
    • Time saved vs. licensing and premium token cost
    • Net benefit of AI subscriptions or infrastructure

This approach answers the following questions: Are developers using AI tools? Does that usage make a measurable difference? And does the difference justify the investment?
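On the cost side, the net-benefit question reduces to simple arithmetic once you have honest inputs. A sketch with hypothetical monthly numbers:

```python
# Hypothetical monthly figures for one team; replace with measured values.
engineers           = 8
hours_saved_per_eng = 6          # measured time savings per engineer
loaded_hourly_rate  = 95.0       # fully burdened cost per engineering hour
license_cost        = 8 * 30.0   # per-seat AI tooling subscriptions
token_cost          = 240.0      # premium / metered usage

value_of_time_saved = engineers * hours_saved_per_eng * loaded_hourly_rate
net_benefit = value_of_time_saved - (license_cost + token_cost)
print(f"Net monthly benefit: ${net_benefit:,.0f}")  # $4,080 with these inputs
```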

In a recent leadership meeting, someone asked, “What percentage of our engineers are using AI to check in code?” That’s an adoption metric, not a performance one. Others have asked whether we can measure AI-generated commits per engineer to report to the board. While technically feasible with specific developer tools, this approach risks reinforcing vanity metrics that prioritize motion over value. Without impact and ROI metrics, adoption alone can lead to gaming behavior, and teams might flood the system with low-value tasks to appear “AI productive.” What matters is whether AI is helping us deliver better, faster, and smarter.

I also recommend avoiding vanity metrics, such as lines of code or commits. These often mislead leaders into equating motion with value. Many vendors boast “AI wrote 50% of our code,” but as developer-experience researcher Laura Tacho explains, this usually counts accepted suggestions, not whether the code was modified, deleted, or even deployed.⁵ We must stay focused on outcomes, not outputs.

The Risk of Turning AI into a Headcount Strategy

One of the more concerning trends I’m seeing is the concept of “headcount conversion,” which involves reducing team size and utilizing the savings to fund enterprise AI licenses. If seven people can be replaced by two and an AI license, along with a premium token budget, some executives argue, then AI “pays for itself.” However, this assumes that AI can truly replace human capability and that the work will maintain its quality, context, and business value.

That might be true for narrow, repeatable tasks, or small organizations or startups struggling with costs and revenue. But it’s dangerous to generalize. AI doesn’t hold tribal knowledge, coach junior teammates, or understand long-term trade-offs. It’s not responsible for cultural dynamics, systemic thinking, or ethical decisions.

Instead of shrinking teams, we should consider expanding capacity. AI can help us do more with the same people. Developer productivity research indicates that engineers typically reinvest AI-enabled time savings into refactoring, enhancing test coverage, and implementing cross-team improvements², which compounds over time into stronger, more resilient software.

Slowing Down to Go Fast

Leaving those leadership meetings, I felt a mix of energy and exhaustion. Many people wanted to appear intelligent, but few were asking thoughtful questions. We were racing toward solutions without clarifying what problem we were solving or how we’d measure success.

So here’s my suggestion: Let’s slow down. Let’s agree on how we’ll track the impact of AI investments. Let’s integrate those measurements into systems we already trust. And let’s stop treating AI as a replacement for frameworks that still work; instead, let’s use it as a powerful tool that helps us deliver better, faster, and with more intention.

AI isn’t a framework. It’s an accelerator. And like any accelerator, it’s only valuable if we’re steering in the right direction.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Leschorn, J. (2025, May 29). Agile Is Dead: Long Live Agentic Development. Superwise. https://superwise.ai/blog/agile-is-dead-long-live-agentic-development/
  2. Ameenza, A. (2025, April 15). The New Minimum Viable Team: How AI Is Shrinking Software Development Teams. https://anshadameenza.com/blog/technology/ai-small-teams-software-development-revolution/
  3. Circei, A. (2025, March 13). Measuring AI in Engineering: What Leaders Need to Know About Productivity, Risk and ROI. Waydev. https://waydev.co/ai-in-engineering-productivity-risk-roi/
  4. Saunders, M. (2025, January 6). DX Unveils New Framework for Measuring Developer Productivity. InfoQ. https://www.infoq.com/news/2025/01/dx-core-4-framework/
  5. GetDX. (2025). Measuring AI Code Assistants and Agents. DX Research. https://getdx.com/research/measuring-ai-code-assistants-and-agents/

Filed Under: Agile, AI, Delivering Value, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Bets, Budgets, and Reframing Software Delivery as Continuous Discovery

June 7, 2025 by philc

8 min read

This post is a follow-up to my articles on estimation and product operating models, exploring how adaptive investment, value discovery, and team ownership align with Vasco Duarte’s call for agility beyond the team.

In my earlier posts, “Software Delivery Teams, Deadlines, and the Challenge of Providing Reliable Estimates” and “How Value Stream Management and Product Operating Models Complement Each Other”, I explored two core challenges that continue to hold organizations back: the illusion of predictability in software estimation, and the inefficiency of funding work through rigid project-based models. I argued that software delivery requires a shift toward probabilistic thinking, value stream alignment, and investment in products and initiatives, not fixed-scope, time-bound projects.

Software implementation and delivery estimations have been a constant theme throughout my career. Often seen as a mix of art and science, they remain highly misunderstood. While teams tend to dread them, organizations rely on them for effective planning. Despite their contentious nature, software estimations are an essential part of the process, sparking countless articles, discussions, and debates in the industry. I’m not arguing against estimation or planning. Organizations must plan. Leaders need to make investment decisions, prioritize resource allocation, and create financial forecasts. That doesn’t change. What does need to change is how we think about estimates, how we communicate their confidence, and how we act on the signals that follow.

This is a nuance that can be hard to understand unless you’ve lived both sides, delivering software inside an Agile team and leading business decisions that depend on forecasts. Estimates aren’t the enemy. One lesson I’ve learned, and others often mention, is that the real issue lies in how rigidly we stick to assumptions and how slow we are to adjust them when real-world complexities arise. What we need is to improve both how the business relies on estimates and how delivery teams develop the capability to estimate, update, and communicate confidence levels over time.

A team member recently shared notes from an Agile meetup featuring Vasco Duarte’s talk, “5 Things Destroying Your Ability to Deliver, Even When You’re Agile.” While I didn’t attend the talk, I’ve followed Vasco’s work for years. The talk referenced a 2024 podcast episode of his on Investing in Software¹, which I hadn’t listened to until now. That episode inspired this follow-up article.

In this episode, Vasco highlights an important point: traditional project management, often seen in boardrooms and annual plans, is based on a flawed assumption that we can predict outcomes weeks in advance and expect nothing to change. Software development, much like the weather, is unpredictable and chaotic.

Even today, many people treat software estimates as if they were comparable to predicting timelines for manufacturing physical products or managing predictable projects, such as constructing a house or bridge. They expect precision, often clinging to the initial estimate as an unyielding benchmark and holding teams accountable to it. However, software development is an entirely different realm. It’s invisible, knowledge-driven work filled with unknowns and unpredictability. In complex systems, even a small input change can trigger dramatically different outcomes. We’ve all encountered the “simple request” that unexpectedly spiraled into a significant architectural overhaul. I appreciate how Vasco ties this to Edward Lorenz’s 1961 discovery that small changes in initial conditions can lead to drastically different outcomes in weather models. That idea became the foundation of chaos theory.

Sound familiar?

In software development, we refer to this as “new work with unknowns,” “technical debt,” “rewrite,” or “refactor.” But we rarely treat it with the same respect we give to unknowns in other disciplines. Instead, we often pretend we know what we’re doing, and then demand that others commit to it. That’s the real chaos.

In addition to my focus on probability-based estimations and the Product Operating Model, Vasco’s four-point manifesto supports a shift I’ve long advocated for in team estimates and product leadership. It encourages an approach to software delivery that prioritizes adaptability, relies on real-time feedback, and views investment as an ongoing process rather than a one-time decision. This mindset isn’t about removing unpredictability but about working effectively within it.

1. From Estimates to Bets: Embracing Uncertainty with Confidence

Vasco encourages us to think like investors, not project managers. Investors expect returns, but they also accept risk and uncertainty. They recognize that not every bet pays off, and they adjust their approach accordingly based on the feedback they receive. This mindset aligns closely with how I’ve approached probabilistic estimation.

In knowledge work, “unknown unknowns” aren’t the exception. They’re the norm. You don’t just do the work, you learn what the work is along the way. What appears simple on the surface may uncover deep design flaws, coupling, or misalignment. That’s why I advocate for making estimates that improve over time, where confidence and learning signals are more important than arbitrary story point velocity.
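One concrete form this takes is a Monte Carlo forecast over historical throughput, projecting from what the team has actually finished rather than from story points. A minimal sketch with hypothetical data:

```python
import random

throughput = [3, 5, 2, 6, 4, 4, 3, 5]  # hypothetical items finished per week
backlog = 40                           # items remaining in the initiative
trials = 10_000

weeks_needed = []
for _ in range(trials):
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(throughput)  # resample a historical week
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
p50, p85 = weeks_needed[trials // 2], weeks_needed[int(trials * 0.85)]
print(f"50% confidence: {p50} weeks; 85% confidence: {p85} weeks")
```

Each run produces a distribution rather than a date, which is exactly the shift from demanding certainty to asking how confident we are right now.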

Instead of forcing certainty, we can ask:

“How confident are we right now?”

“What would increase or decrease that confidence?”

“Are we ready to double down, or should we pause and reassess?”

That’s what makes it a bet. And bets are revisited, not rubber-stamped.

2. Budgeting for Change, Not Certainty

The second point in Vasco’s manifesto hits close to home: fund software like you invest in the stock market, bit by bit, adjusting as you go. This reinforces what I wrote in my product operating model article: modern organizations must stop budgeting everything up front for the year and assuming the original plan will hold.

Annual planning works for infrastructure, but not for innovation and knowledge work.

In a product-based funding model, teams are funded by their value stream or product, not by project deliverables estimated (or guessed) a year in advance. They receive investment to continuously discover, deliver, and evolve, reassessing value rather than completing a fixed scope against a dated estimate. This model gives you flexibility: invest more in what’s working, cut back where it’s not, and shift direction without resetting your entire operating plan.

3. Experiments Are the New Status Report

Vasco’s third point is deceptively simple: experiment by default. But what he’s talking about is creating adaptive intelligence at the portfolio level, not just team agility.

When we fund work incrementally and view features or epics as bets, we need signals to tell us whether to continue. In our organization, that signal often comes in the form of experiments: lightweight tests, spikes, MVPs, or “feature toggles” that generate fast feedback.
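
As a small illustration of the feature-toggle tactic, here is a minimal sketch; the FLAGS table, the bucketing scheme, and the record_event call are hypothetical stand-ins for whatever flag service and telemetry pipeline your stack actually provides.

```python
import hashlib

# Minimal sketch of a feature toggle that doubles as a feedback source.
# FLAGS and record_event are hypothetical stand-ins for a real flag
# service and telemetry pipeline.

FLAGS = {"new_checkout_flow": 0.10}  # roll out to 10% of sessions

def is_enabled(flag: str, session_id: str) -> bool:
    # Stable bucketing: the same session always sees the same variant.
    digest = hashlib.sha256(f"{flag}:{session_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < FLAGS.get(flag, 0.0)

def record_event(name: str, **fields) -> None:
    print(f"event={name} {fields}")  # stand-in for real analytics

def checkout(session_id: str) -> None:
    variant = "new" if is_enabled("new_checkout_flow", session_id) else "old"
    record_event("checkout_started", session=session_id, variant=variant)
    # ... route to the corresponding flow ...

checkout("session-123")
```

Comparing outcomes across the two variants is what turns this engineering convenience into a decision signal.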

These aren’t just engineering tactics. They’re governance mechanisms.

When teams experiment, they reduce waste, increase alignment, and surface learning early. But more importantly, they feed information back into the portfolio process. A product manager might learn that a new feature doesn’t solve the core problem. A tech lead might identify a performance bottleneck before it becomes a support nightmare. A value stream might kill a half-funded initiative before it eats more cost.

Experiments give you clarity. Gantt charts give you theater.

4. End-to-End Ownership Enables Real Agility

The fourth point in Vasco’s manifesto is about end-to-end ownership, and it resonates deeply with how our teams are structured. When teams own their products from idea to delivery to operation, they don’t just ship; they deliver. They learn, adapt, and inform the next bet.

This kind of ownership isn’t a luxury; it’s a prerequisite for agility at scale.

In our transition to a product operating model, we restructured our delivery teams to align with value streams. We gave them clarity of purpose, full-stack capability, and autonomy to act. But what we hope to get in return isn’t just faster output; it’s better signals.

Teams close to the work produce insights you can trust. Teams trapped in delivery factories or matrixed dependencies can’t.

The Three Ways Still Apply

Listening to Vasco’s manifesto again, I was struck by how strongly it aligns with a set of principles we’ve had since at least 2021: The Three Ways, as described by Gene Kim and coauthors in The DevOps Handbook.

  • The First Way emphasizes flow and systems thinking, focusing on how value moves across the entire stream, not just within teams or silos.
  • The Second Way amplifies feedback loops, not just testing or monitoring, but real signals about whether we’re solving the right problems.
  • The Third Way advocates for a culture of continuous experimentation and learning: accepting uncertainty, embracing risk, and using practice to gain mastery.

These are all still relevant today. But what often goes unspoken is that these principles must extend beyond the delivery teams, into planning, budgeting, prioritization, and governance.

Vasco’s idea of funding software like investments and treating initiatives as “bets” highlights the need to strengthen feedback loops across the portfolio. Experimentation is no longer just automated testing; it now informs strategic funding and continuous learning. Similarly, flow isn’t just about deployment pipelines anymore; it’s about shortening the path from business decision to tangible, measurable results.
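
One way to make that broader notion of flow measurable, assuming you capture when an initiative is approved and when its outcome is first observed in production, is to track decision-to-outcome lead time. A minimal sketch with hypothetical records:

```python
from datetime import date
from statistics import median

# Minimal sketch of end-to-end flow measurement: lead time from the
# business decision (funding approved) to the first measurable outcome
# in production. All records below are hypothetical.

initiatives = [
    {"name": "init-a", "approved": date(2025, 1, 6),  "outcome": date(2025, 3, 17)},
    {"name": "init-b", "approved": date(2025, 2, 3),  "outcome": date(2025, 4, 28)},
    {"name": "init-c", "approved": date(2025, 2, 24), "outcome": date(2025, 7, 7)},
]

lead_times = [(i["outcome"] - i["approved"]).days for i in initiatives]
print(f"median decision-to-outcome lead time: {median(lead_times)} days")
```

Watching that number trend down, or refuse to, says more about end-to-end flow than any team-level velocity chart.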

If we’re truly going to embrace agility across the business, we must apply the Three Ways at every level of the system, especially where strategy meets funding and planning.

The Real Work: Planning for Chaos, Leading with Signals

Here’s where I’ll close, echoing Vasco’s message: the fundamental constraint in software isn’t at the team level. It’s at the leadership level, where we cling to project thinking, demand estimates without context, and build plans on the illusion of certainty.

I strongly advocate for incorporating confidence levels and probabilistic estimation in our organization. For now, though, we operate on an annual funding model, setting the entire year’s operating plan, including product development investments, in advance. I hope to eventually work with product-funded budgets instead; only time will tell. Even so, we can still evaluate our product development investments as we go and adjust direction when needed.

To lead a modern software organization effectively: fund like an investor, not a contractor. Measure progress by learning, not milestones hit. Enable teams to provide actionable insights, not just reports. Structure governance around value-driven feedback, not activity tracking.

Because you’re no longer managing projects, you’re managing bets in a chaotic system. And the sooner we stop pretending otherwise, the better our outcomes will be.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Duarte, Vasco (Host). “Xmas Special: Investing in Software: Alternatives To Project Management For Software Businesses.” Scrum Master Toolbox Podcast: Agile storytelling from the trenches [Audio podcast]. December 27, 2024. Apple Podcasts, https://podcasts.apple.com/us/podcast/scrum-master-toolbox-podcast-agile-storytelling-from/id963592988

Related Articles

  1. “Software Delivery Teams, Deadlines, and the Challenge of Providing Reliable Estimates.” Phil Clark. rethinkyourunderstanding.com
  2. “How Value Stream Management and Product Operating Models Complement Each Other.” Phil Clark. rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

Copyright © 2026 · RYU Advisory & Media, LLC. All rights reserved.
Content reflects general leadership experience. Examples and details may be generalized to protect confidentiality.
