Rethink Your Understanding

Transforming Software Delivery


Why Value Stream Management and the Product Operating Model Matter (and What Comes Next)

November 5, 2025 by philc

6 min read

I had the opportunity to revisit my January article and refine its key points for a recent Flowtopia.io post.

Seeing the Why Behind the Frameworks

In 2021, as part of our evolving Agile transformation, I introduced Value Stream Management (VSM) and later championed the Product Operating Model (POM). Yet I never clearly articulated why these practices mattered.

Looking back, we had already been moving toward a product-oriented model long before naming it. Cross-functional product teams operated organically but without shared governance. When capacity pressures mounted, priorities blurred and inefficiencies surfaced, showing that alignment and communication of purpose are as essential as the frameworks themselves.

Inside my own organization, alignment lagged. Technology advanced rapidly, and engineers and Agile Leaders embraced flow metrics and value-stream thinking, while the product function remained loosely engaged. Without clear accountability, the message fractured: technology optimized for flow; product managed for capacity. The gap limited our ability to realize the frameworks’ potential.

This imbalance is common. Most organizations face more work than they have capacity for, making prioritization and a focus on outcomes essential. VSM and the Product Operating Model address this directly, aligning teams, optimizing workflows, and ensuring that every hour of capacity contributes to real value.

“Adopting frameworks isn’t enough; leaders must overcommunicate their purpose.”

The Turning Point: When Efficiency Isn’t Enough

Every transformation reaches a moment of truth. You automate more, deploy faster, and report higher output, yet business leaders still ask, “How are our investments being utilized?”

The disconnect isn’t about effort or talent, but about visibility. Most digital organizations struggle to clearly understand how knowledge work flows or how investments in Scrum, Kanban, DevOps, automation, and now AI impact performance. Teams, in turn, can’t see how their daily work ties to customer or business outcomes.

That’s where VSM and POM intersect, two complementary frameworks that connect flow, alignment, and outcomes. Both emerged from the same realization: efficiency alone is insufficient. Without linking how value flows to what outcomes it creates, organizations risk optimizing for motion instead of progress. Sustaining expertise and funding across a product’s lifespan, rather than through short-term projects, produces better results.

From Projects to Products

For decades, technology operated as a cost center measured by utilization and velocity. Projects were funded, staffed, delivered, and dissolved. The product model reversed that logic.

By aligning long-lived teams around customer and business outcomes, organizations create real ownership and continuity. Teams become responsible not just for delivery, quality, and security, but for the outcomes they produce.

Economic accountability strengthens this model. In a product-funded operating structure, long-lived teams contribute to sales and growth, but they also influence the margins those products generate. That requires understanding more than top-line revenue. Teams should know their cost of goods sold (COGS): the direct costs, licenses, labor, implementation effort, and other team expenses that determine the actual cost of delivering and supporting the product.

When teams are evaluated on margin contribution rather than throughput or feature count, the dynamic changes. Ownership deepens. The definition of value expands. Financial discipline becomes part of everyday decision-making.
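As a rough sketch of what margin-based accountability looks like in numbers (all figures and names below are hypothetical, not drawn from any real product), a team's contribution can be computed from revenue and the COGS components described above:

```python
# Hypothetical illustration of margin contribution for a long-lived
# product team. COGS follows the framing above: licenses, labor,
# implementation effort, and other direct team expenses.

def margin_contribution(revenue, licenses, labor, implementation, other):
    """Return (margin in dollars, margin as a fraction of revenue)."""
    cogs = licenses + labor + implementation + other
    margin = revenue - cogs
    return margin, margin / revenue

margin, pct = margin_contribution(
    revenue=2_000_000, licenses=150_000, labor=900_000,
    implementation=120_000, other=80_000,
)
print(f"Margin: ${margin:,} ({pct:.0%})")  # Margin: $750,000 (38%)
```

Evaluating a team on that bottom number, rather than on feature count, is what shifts the conversation from throughput to sustainable value.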

This also creates new complexity. Accountability and funding are no longer as simple as “get the code out.” They become “deliver a product customers will buy, at a margin the business can sustain.” For many organizations, this is far harder than shipping features, especially when teams are short-lived, responsibilities overlap, or cost allocations remain unclear.

But this discipline is one of the most powerful levers for turning the Product Operating Model from a framework built for speed into one built for sustainable value. It does not push teams back into a cost-center posture. Instead, it gives them the visibility to understand how Flow, outcomes, and customer success connect directly to profitability.

In our case, context switching dropped. Developers embedded in single domains became accountable for both flow and customer outcomes. Priorities shifted faster, decisions stayed within teams, and purpose became clearer. When people see how their work creates value, metrics stop being abstract and become insights for improvement; they start to matter.

Context Is Everything

“There is no one-size-fits-all approach to transformation. The true power of frameworks like VSM and POM lies in their flexibility to serve as blueprints rather than rigid rules.”

Adoption succeeds only when frameworks align with an organization’s structure, culture, and leadership context. Models fail not by design but by misapplication. That’s why effective organizations start by seeing their system before changing it.

Value Stream Mapping provides visibility, showing how work moves, where it slows, and how efficiently it reaches customers. Flow Engineering practices, such as Outcome Maps, Current-State Maps, and Dependency Maps, enable leaders to visualize how work, teams, and dependencies interact. These visualizations reveal friction, conflicting priorities, and hidden handoffs that delay the realization of value.

“Visibility creates alignment. Alignment establishes the foundation for improvement.”

The 2024 Project to Product State of the Industry Report confirms that elite organizations don’t just implement frameworks; they adapt them to fit their structure and customer context. That adaptability turns adoption into transformation.

Flow and Realization: The Two Sides of Value

Every delivery system operates in two dimensions:

Flow – how efficiently value moves.

Realization – how effectively that value produces business or customer outcomes.

Most organizations measure one and overlook the other, or treat them as separate conversations.

Flow metrics, including Flow Time, Velocity, Efficiency, Distribution, and DORA metrics, reveal system health but not its impact.

Realization metrics, such as retention, revenue contribution, and time-to-market, show outcomes but not efficiency.

“Flow transforms effort into movement; realization transforms movement into impact.”

The 2024 Project to Product Report found that fewer than 15% of organizations integrate flow metrics with business outcomes. Yet those that do so outperform their peers on both speed and customer satisfaction.
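To make the flow side concrete, consider Flow Efficiency, one of the metrics named above. It is commonly defined as active work time divided by total elapsed flow time; everything else is wait state. A minimal sketch:

```python
from datetime import timedelta

def flow_efficiency(active: timedelta, flow_time: timedelta) -> float:
    """Fraction of elapsed flow time a work item was actively worked on;
    the remainder is wait states (queues, approvals, handoffs)."""
    return active / flow_time

# An item that took 20 calendar days end to end, with 4 days of active work:
eff = flow_efficiency(active=timedelta(days=4), flow_time=timedelta(days=20))
print(f"{eff:.0%}")  # 20%
```

A number like 20% is not unusual; it simply says most of the item's lifetime was spent waiting, which is exactly the signal realization metrics alone cannot show.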

Measuring Across Layers

Metrics operate across three layers:

• System Layer: Flow & DORA metrics reveal delivery efficiency.

• Team Layer: Developer Experience (DX) and sentiment show team health.

• Business Layer: Realization metrics link work to outcomes.

Connecting these layers turns measurement into meaning and prevents metric theater: reporting what’s easy instead of what matters.

Leadership and Structure: The Missing Link

Even the best frameworks fail without a shift in leadership. Adopting VSM and POM means transitioning from a command-and-control approach to one of clarity, from managing tasks to managing systems.

Delegation and empowerment become strategic levers. Leaders define and communicate outcomes and boundaries; teams own delivery, quality, and learning within them. Guided by data-driven feedback, they experiment and improve.

The best teams treat flow and realization as continuous feedback loops, a living system that evolves with every release.

Governance through transparency replaces micromanagement. Dashboards enable leaders to coach, rather than control, by focusing on flow, bottlenecks, and opportunities. Empowerment is a shared ownership of outcomes.

A mature value-stream culture recognizes that leadership doesn’t disappear, but evolves. The leader’s job is to design the system where great work happens, not be the system itself.

What Comes Next: Amplification Through AI

Organizations often ask, “What’s next?”

The answer is amplification, using technology, data, and AI to accelerate insight and learning.

AI doesn’t change your system; it magnifies it. If your processes are slow, AI exposes that faster. If your system is healthy, it enhances visibility, identifies bottlenecks, and predicts where investment yields the highest return.

The future of AI in VSM is about augmenting human judgment, not replacing it. Intelligent automation links flow metrics to outcomes, detects deviations early, and surfaces recommendations that leaders can act on in real time. This evolution expands the leader’s role once again, from observer to orchestrator of improvement.

Bridging Technology and Business Value

My ongoing focus is strengthening the connection between technology execution and business outcomes, a lesson shaped by feedback from an executive 360-degree assessment: “You should focus more on business results as a technology leader.”

That insight was right. We transformed from a monolithic architecture and waterfall process into a world-class Agile, microservices-based organization, yet we hadn’t consistently shown how that transformation delivered measurable business results.

To close that gap, we’re developing tools that make value visible:

• Value Stream Templates to connect work with business objectives.

• Initiative & Epic Definitions emphasizing outcomes and dependencies.

• Team-Level OKRs tied to measurable business priorities.

• Knowledge Hub Updates highlighting outcomes over outputs.

The 2024 Project to Product Report found that organizations that consistently link delivery, metrics, and business outcomes outperform their peers in terms of agility, profitability, and retention.

“The answers reveal whether your organization is optimizing activity or enabling value.”

The Real Transformation

When combined, VSM and POM unlock a higher level of capability. They teach leaders to see how work flows, how people collaborate, and how outcomes drive real impact.

When you see work as a flow of value rather than a measure of effort, you stop managing activity and start leading outcomes.

That’s the actual transformation, shifting focus from what we deliver to what difference it makes.

“The time to act is now. Let’s lead purposefully, ensuring our teams deliver meaningful, measurable value in 2026 and beyond.”

Transformation is never solitary; shared understanding across our industry is where alignment begins.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. The 2024 Project to Product State of the Industry Report, Planview, https://info.planview.com/project-to-product-state-of-the-industry-_report_vsm_en_reg.html
  2. Why Value Stream Management and the Product Operating Model Matter, Rethink Your Understanding, https://rethinkyourunderstanding.com/2025/01/why-vsm-and-the-product-operating-model-matter/

Filed Under: Agile, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

Beyond “Beyond Delivery”: AI Across the Value Stream

October 11, 2025 by philc

A follow-up article and reflection on how AI amplifies the systems it enters, and why clarity in measurement and language defines its true impact.

4 min read

After reading Laura Tacho’s latest article, “What the 2025 DORA Report Means for Your AI Strategy,” published today by DX, I found myself nodding along from start to finish. Her analysis reinforces what many of us have been saying for the past year: AI doesn’t automatically improve your system; it amplifies whatever already exists within it.

If your system is healthy, AI accelerates learning, delivery, and improvement. If it’s fragmented or dysfunctional, AI will only expose that reality faster.

In my earlier and related article, “Beyond Delivery: Realizing AI’s Potential Across the Value Stream,” I explored this same theme, referencing Laura’s previous work and the DX Core Four research to show how AI’s true promise emerges when applied across the entire value stream, not just within delivery. Her new reflections build on that conversation beautifully, grounding it in DORA’s 2025 findings and placing even greater emphasis on what truly determines AI success: measurement, monitoring, and system health.

AI’s True Leverage Is in the System

What stands out in both discussions is that AI amplifies the system it enters.

Healthy systems, with strong engineering practices, small-batch work, solid source control, and active observability, see acceleration. Weak systems, where friction and inconsistency already exist, see those problems amplified.

That’s why measurement and feedback are the new leadership disciplines.

Organizations treating AI as a system-level investment, rather than a tool for individual productivity, are seeing the greatest impact. They aren’t asking “how many developers are using Copilot?” but instead “how is AI helping our teams improve outcomes across the value stream?”

DORA’s latest research validates that shift, focusing less on adoption rates and more on outcomes. It echoes a point Laura made and I emphasized in my own writing: AI’s advantage is proportional to the strength of your engineering system.

Why Clarity Still Matters

While I agree with nearly everything in Laura’s article, one nuance deserves attention, not as a critique, but as context.

DORA, DX Core 4, LinearB, and other Software Engineering Intelligence (SEI) platforms are not Value Stream Management (VSM) platforms. They measure a segment of the delivery lifecycle: Create and Release. True VSM, however, spans the entire lifecycle, from idea to delivery and operation.

This distinction matters because where AI is applied should match where your bottlenecks exist.

If your constraint is upstream, in ideation or backlog management, and you only apply AI within development, you’re optimizing a stage that isn’t the problem.

Think of your value stream as four connected tanks of water: ideation, creation, release, and operation.

If the first tank (ideation) is blocked, making the water move faster in the second (creation) doesn’t improve throughput. You’re just circulating water in your own tank while everything above remains stuck.

That’s why AI should be applied where it can improve the overall flow, across the whole system, not just a single stage.
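The tank analogy is the classic theory-of-constraints point: end-to-end throughput is capped by the slowest stage, so accelerating a non-bottleneck stage changes nothing. A toy sketch (the stage rates are invented for illustration):

```python
# Invented throughput (items/week) for the four stages of the value stream.
stages = {"ideation": 2, "creation": 10, "release": 3, "operation": 8}

bottleneck = min(stages, key=stages.get)
print(bottleneck, stages[bottleneck])  # ideation 2

# Doubling creation's rate changes nothing end to end:
stages["creation"] *= 2
print("end-to-end throughput:", min(stages.values()))  # still 2
```

Only raising the bottleneck stage's rate, here ideation, moves the system's output.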

It’s also where clarity of language matters. Some Software Engineering Intelligence (SEI) platforms, including Laura’s organization, integrate DORA metrics within broader insights and occasionally describe their approach as VSM. From a marketing standpoint, that’s understandable; SEI platforms compete with full-scale VSM platforms, such as Planview Viz, which measure the entire value stream. However, it’s worth remembering that DORA and most SEI metrics represent one vital stage, not the entire system.

On Vendors, Neutrality, and Experience

I have deep respect for Laura and her organization’s work advancing how we measure and improve developer experience. Over the last four years, I’ve also established professional relationships with several of these platform providers, offering feedback and leadership perspectives to their teams as they evolve their products and strategies.

I share this because my perspective is grounded in firsthand experience, research, and conversations across the industry, not because of any endorsement. I’m not paid to promote any vendor. Those who know me are aware that I have my preferences, currently Planview Viz for Value Stream Management, as well as LinearB and the DX Core 4 for Software Engineering Intelligence and developer-experience insights.

Each offers unique value, but I’ve yet to see a single platform deliver a truly complete view across all stages, combining full system-level metrics and team sentiment data. Until that happens, I’ll continue to advocate for clarity in how these solutions describe and market themselves, and for measurements that accurately reflect reality.

And to be fair, I haven’t kept up with every vendor’s latest releases, so I encourage anyone exploring these tools to do their own research and choose what best fits their organization’s context and maturity.

Closing Thought

Laura’s article is spot-on in identifying what really drives AI impact: monitoring, measuring, and managing the system it touches.

That’s the same theme at the heart of Beyond Delivery: that AI’s potential isn’t realized through automation alone, but through its ability to illuminate flow, reveal friction, and help teams improve faster than before.

When we describe our systems accurately, we focus on what truly matters, and that’s when AI stops being a tool for speed and becomes an accelerant for value across the entire system.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Tacho, Laura. “What the 2025 DORA Report Means for Your AI Strategy.” DX Newsletter, October 8, 2025.
    Available at: https://newsletter.getdx.com/p/2025-dora-report-means-for-your-ai-strategy
  • Clark, Phil. “Beyond Delivery: Realizing AI’s Potential Across the Value Stream.” Rethink Your Understanding, September 2025.
    Available at: https://rethinkyourunderstanding.com/2025/09/beyond-delivery-realizing-ais-potential-across-the-value-stream/
  • DORA Research Team. “2025 State of AI-Assisted Software Development (DORA Report).” Google Cloud / DORA, September 2025.
    Available at: https://cloud.google.com/devops/state-of-devops

Filed Under: Agile, AI, DevOps, Metrics, Product Delivery, Software Engineering, Value Stream Management

Beyond Delivery: Realizing AI’s Potential Across the Value Stream

September 29, 2025 by philc

Moving beyond AI-assisted delivery to achieve measurable, system-wide impact through value stream visibility and flow metrics.

10 min read

At the 2025 Engineering Leadership Tech Summit, Mik Kersten previewed ideas from his upcoming book, Output to Outcome: An Operating Model for the Age of AI. He reminded us of a truth often overlooked in digital transformation: Agile delivery teams are not the constraint in most cases.

Kersten broke the software value stream into four phases, Ideate, Create, Release, and Operate, and showed how the majority of waste and delay happens outside of coding. One slide in particular resonated with me. Agile teams accounted for just 8% of overall cycle time. The real delays sat at the bookends: 48% in ideation, slowed by funding models, approvals, and reprioritizations; and 44% in release, bogged down by dependencies, technical debt, and manual processes.

This framing raises a critical question: if we only apply AI to coding or delivery automation, are we just accelerating the smallest part of the system while leaving the actual bottlenecks untouched?
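A back-of-the-envelope calculation using Kersten's split makes the point. Assume an illustrative 180-day idea-to-value cycle divided 48% ideation, 8% create, 44% release (operate set aside for simplicity). Halving the Create stage barely moves the total; halving ideation moves it substantially:

```python
total = 180  # days from idea to value (illustrative, not measured)
share = {"ideate": 0.48, "create": 0.08, "release": 0.44}
days = {stage: frac * total for stage, frac in share.items()}

# Halve only the Create stage (e.g., AI-assisted coding):
after_create = total - days["create"] / 2   # ~172.8 days (about 4% faster)
# Halve the Ideation stage instead:
after_ideate = total - days["ideate"] / 2   # ~136.8 days (about 24% faster)
print(round(after_create, 1), round(after_ideate, 1))
```

This is Amdahl's law applied to a value stream: the ceiling on any optimization is the share of total time the optimized stage occupies.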

AI in the Delivery Stage: Where the Industry Stands

In a recent DX Engineering Enablement podcast, Laura Tacho and her co-hosts discussed the role of AI in enhancing developer productivity. Much of their discussion centered on the Create and Release stages: code review, testing, deployment, and CI/CD automation. Laura made a compelling point about moving beyond “single-player mode”:

“AI is an accelerant best when it’s used at an organizational level, not when we just put a license in the hands of an individual… Platform teams can own a lot of the metaphorical AI headcount and apply it in a horizontal way across the organization.”

Centralizing AI adoption and applying it across delivery produces leverage, rather than leaving individuals to experiment in isolation. But even this framing is still too narrow.

The Missing Piece: AI Adoption Across the Entire Stream

The real opportunity is to treat AI not as a tool for delivery efficiency, but as a partner across the entire value stream. That means embedding AI into every stage and measuring it with system-level visibility, not just delivery dashboards.

This is why I value platforms that integrate tool data across the whole stream, system metrics and visibility dashboards, rather than tools that stop at delivery.

Of course, full-stream visibility platforms are more expensive, and in many organizations, only R&D teams are driving efforts to improve flow. As I’ve argued in past writing on SEI vs. VSM, context matters: sometimes the right starting point is SEI, when delivery is the bottleneck. But when delays span ideation, funding, or release, only a VSM platform can expose and address systemic waste.

AI opportunities across the stream:

  • Ideation (48%) – Accelerate customer research, business case drafting, and approvals; surface queues and wait states in one view.
  • Create (8%) – Apply AI to coding, reviews, and testing, but tie it to system outcomes, not vanity speedups.
  • Release (44%) – Automate compliance, dependency checks, and integration work to reduce handoff delays.
  • Operate – Target AI at KTLO and incident patterns, feeding learnings back into product strategy.

When AI is applied across the whole system (value stream), we can ask a better question: not “How fast can we deploy?” but “How much can we compress idea-to-value?” Moving from 180 days to 90 days or less becomes possible when AI supports marketing, product, design, engineering, release, and support, and when the entire system is measured, not just delivery.

VSM vs. Delivery-Only Tooling

This is where tooling distinctions matter. DX Core 4 and SEI platforms, such as LinearB, focus on delivery (Create and Release), which is valuable but limited to one stage of the system. Planview Viz and other VSM platforms, by contrast, elevate visibility across the entire value stream.

Delivery-only dashboards may show how fast you’re coding or deploying. But Value Stream Management reveals the actual business constraints, often upstream in funding, prioritization, PoCs, and customer research, or downstream in handoffs and release.

Without that lens, AI risks becoming just another tool that speeds up developers without improving the system.

AI as a Force Multiplier in Metrics Platforms

AI embedded directly into metrics platforms can change the game. In a recent Product Thinking podcast, John Cutler observed:

“We talked to a company that’s spending maybe $4 million in staff hours per quarter around just people spending time copying and prepping for all these types of things… All they’re doing is creating a dashboard, pulling together a lot of information, and re-contextualizing it so it looks the same in a meeting. I think that’s just a massive opportunity for AI to be able to help with that kind of stuff.”

This hidden cost of operational overhead is real. Leaders and teams waste countless hours aggregating and reformatting data into slides or dashboards to make it consumable.

Embedding AI into VSM or SEI platforms removes that friction. Instead of duplicating effort, AI can generate dashboards, surface insights, and even facilitate the conversations those dashboards are meant to support.

This is more of a cultural shift than a productivity gain. Less slide-building, more strategy. Less reformatting, more alignment. And metrics conversations that finally scale beyond the few who have time to stitch the story together manually.

The ROI Lens: From Adoption to Efficiency

The ROI of AI adoption is no longer a question of whether to invest; that decision is now a given. As Atlassian’s 2025 AI Collaboration Report shows, daily AI usage has doubled in the past year, and executives overwhelmingly cite efficiency as the top benefit.

The differentiator now is how efficiently you manage AI’s cost, just as the cloud debate shifted from whether to adopt to how well you could optimize spend.

But efficiency cannot be measured by isolated productivity gains. Atlassian found that while many organizations report time savings, only 4% have seen transformational improvements in efficiency, innovation, or work quality.

The companies breaking through embed AI across the system: building connected knowledge bases, enabling AI-powered coordination, and making AI part of every team.

That’s why the ROI lens must be grounded in flow metrics. If AI adoption is working, we should see:

  • Flow time shrinks
  • Flow efficiency rises
  • Waste reduction becomes visible in the stream
  • Flow velocity accelerates (more items delivered at the same or lower cost)
  • Flow distribution rebalances (AI resolving technical debt and reducing escaped defects)
  • Flow load stabilizes (AI absorbing repetitive work and signaling overload early)

VSM system-wide platforms make these signals visible, showing whether AI is accelerating the idea-to-value process across the entire stream, not just helping individuals move faster.
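A hedged sketch of how those signals might be checked against before-and-after flow snapshots (metric names, values, and thresholds here are illustrative, not any vendor's API):

```python
# Compare flow metrics before and after an AI rollout.
# All numbers are invented for illustration.
before = {"flow_time_days": 24.0, "flow_efficiency": 0.15,
          "flow_velocity": 30, "change_failure_rate": 0.12}
after  = {"flow_time_days": 18.0, "flow_efficiency": 0.22,
          "flow_velocity": 38, "change_failure_rate": 0.11}

def ai_is_helping(before, after):
    """AI adoption should shrink flow time, raise efficiency and velocity,
    and leave stability (change failure rate) steady or better."""
    return (after["flow_time_days"] < before["flow_time_days"]
            and after["flow_efficiency"] > before["flow_efficiency"]
            and after["flow_velocity"] > before["flow_velocity"]
            and after["change_failure_rate"] <= before["change_failure_rate"])

print(ai_is_helping(before, after))  # True
```

The value of framing it this way is that a speedup accompanied by a rising change failure rate fails the check: motion without progress.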

Bringing It Full Circle

In recent conversations with a large organization’s CTO, and again with Laura while exploring how DX and Anthropic measure AI, I kept returning to the same point: we already have the metrics to know if AI is making an impact. AI is now just another option or tool in our toolbox, and its effect is reflected in flow metrics, change failure rates, and developer experience feedback.

We are also beginning to adopt DX AI Framework metrics, which are structured around Utilization, Impact, and Cost, aligning with the metrics that companies like Dropbox and Atlassian currently measure. But even as we incorporate these, we continue to lean on system-level flow metrics as the foundation. They are what reveal whether AI adoption is truly improving delivery across the value stream, from ideation to production.

Leadership Lessons from McKinsey and DORA

This perspective also echoes Ruba Borno, VP at AWS, in a recent McKinsey interview on leading through AI disruption. She noted that while AI’s pace of innovation is unprecedented, only 20–30% of proofs of concept reach production. The difference comes from data readiness, security guardrails, leadership-driven change management, and partnerships.

And the proof is tangible: Canva, working with AWS Bedrock, moved from the idea of Canva Code to a launched product in just 12 weeks. That’s precisely the kind of idea-to-operation acceleration we need to measure. It shows that when AI is applied systematically, you don’t just make delivery faster; you also make the entire flow from concept to customer measurably shorter.

The 2025 DORA State of AI-Assisted Software Development Report reinforces this reality. Their cluster analysis revealed that only the top performers, approximately 40% of teams, currently experience AI-enhanced throughput without compromising stability. For the rest, AI often amplifies existing dysfunctions, increasing change failure rates or generating additional waste.

Leadership Implications: What the DORA Findings Mean for You

The 2025 DORA report indicates that only the most mature teams currently benefit from AI-assisted coding. For everyone else, AI mostly amplifies existing problems. What does that mean if you’re leading R&D?

1. Don’t skip adoption, but don’t roll it out unthinkingly.

AI is here to stay, but it’s not a silver bullet. Start small with teams that already have strong engineering practices, and use them to build responsible adoption patterns before scaling.

2. Treat AI as an amplifier of your system.

If your flow is healthy, AI accelerates it. If your flow is dysfunctional, AI makes it worse. Think of it like a turbocharger: great when the engine and brakes are tuned, dangerous when they’re not.

3. Use metrics to know if AI is helping or hurting.

  • Flow time, efficiency, and distribution should improve.
  • DORA’s stability metrics (such as change failure rate) should remain steady or decline.
  • Developer sentiment should show growing confidence, not frustration.

4. Fix bottlenecks in parallel.

AI won’t remove waste; it will expose it faster. Eliminate approval delays, reduce tech debt, and streamline release processes so AI acceleration actually creates value.

5. Keep the core message in focus.

The lesson isn’t “don’t adopt AI.” It’s: adopt responsibly, measure outcomes, and strengthen your system so that AI becomes an accelerant, not a liability.

Ruba’s message, reinforced by both McKinsey and DORA, leads to the same conclusion: AI adoption succeeds when it’s measured at the system level, tied to business outcomes, and championed by leadership. Without that visibility, organizations risk accelerating pilots that never translate into value.

Conclusion: Beyond Delivery

The conversation about AI in software delivery is maturing. It’s no longer just about adoption, but about managing costs and system impact. AI must be measured not only by its utilization but also by how it improves flow efficiency, compresses the idea-to-value cycle, and reduces systemic waste.

The organizations that will win in this new era are those that:

  • Embed AI across the entire value stream, not just in delivery.
  • Measure ROI through flow metrics that connect improvements to business outcomes.
  • Manage AI’s cost as carefully as they once managed cloud costs.
  • Lead with visibility, change management, and partnerships to scale adoption.

And critically, successful AI integration requires more than deploying tools. It requires thoughtful measurement, training, and implementation practices that sustain quality, applied consistently across every role, from product and design to operations and support. Only then can organizations ensure that the promise of acceleration improves outcomes without undermining the collaboration and sustainability that long-term software success depends on.

In short: AI in delivery is helpful, but AI across the value stream is transformational.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  • Atlassian. (2025). How leading companies unlock AI ROI: The AI Collaboration Index. Atlassian Teamwork Lab. Retrieved from https://atlassianblog.wpengine.com/wp-content/uploads/2025/09/atlassian-ai-collaboration-report-2025.pdf
  • Borno, R., & Yee, L. (2025, September). How to lead through the AI disruption. McKinsey & Company, At the Edge Podcast (transcript). Retrieved from https://www.mckinsey.com
  • Cutler, J. (2025, September 23). Product Thinking: Freeing Teams from Operational Overload [Podcast]. Episode 247. Apple Podcasts. https://podcasts.apple.com/us/podcast/product-thinking/id1550800132?i=1000728179156
  • DX, Engineering Enablement Podcast. (2025). Episode excerpt on AI’s role in developer productivity and platform teams. DX. (Quoted in article from Laura Tacho). Episode 90, https://podcasts.apple.com/us/podcast/the-evolving-role-of-devprod-teams-in-the-ai-era/id1619140476?i=1000728563938
  • DX (Developer Experience). (2025). Measuring AI code assistants and agents: The DX AI Measurement Framework™. DX Research, co-authored by Abi Noda and Laura Tacho. Retrieved from https://getdx.com (Image: DX AI Measurement Framework).
  • Kersten, M. (2025). Output to Outcome: An Operating Model for the Age of AI (forthcoming). Presentation at the 2025 Engineering Leadership Tech Summit.
  • Google Cloud & DORA (DevOps Research and Assessment). (2025). 2025 State of AI-Assisted Software Development Report. Retrieved from https://cloud.google.com/devops/state-of-devops

Further Reading

For readers interested in exploring AI ideas further, here are a few related pieces from my earlier writing:

  • AI in Software Delivery: Targeting the System, Not Just the Code
  • AI Is Improving Software Engineering. But It’s Only One Piece of the System
  • Leading Through the AI Hype in R&D
  • Decoding the Metrics Maze: How Platform Marketing Fuels Confusion Between SEI, VSM, and Metrics

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Software Engineering, Value Stream Management

AI in Software Delivery: Targeting the System, Not Just the Code

August 9, 2025 by philc

7 min read

This article is a follow-up to my earlier post, AI Is Improving Software Engineering. But It’s Only One Piece of the System. In that post, I explored how AI is already helping engineering teams work faster and better, but also why those gains can be diminished if the rest of the delivery system lags.

Here, I take a deeper look at that system-wide perspective. Adopting AI is about strengthening the entire system. We need to think about AI not only within specific teams but at the organizational level, ensuring its impact is felt throughout the value stream.

AI has the potential to improve how work flows through every part of our delivery system: product, QA, architecture, platform, and even business functions like sales, marketing, legal, and finance.

If you already have robust delivery metrics, you can pinpoint exactly where AI will have the most impact, focusing it on the actual constraints rather than “speeding up” work at random. But for leaders who don’t yet have a clear set of system metrics and are still under pressure to show AI’s return on investment, I strongly recommend starting with a platform or framework that captures system delivery performance.

In my previous articles, I’ve outlined the benefits of SEI (Software Engineering Intelligence) tools, DORA metrics (debatable), and, ideally, Value Stream Management (VSM) platforms. These solutions measure and visualize delivery performance across the system, tracking indicators like cycle time, throughput, quality, and stability. They help you understand your current performance and also enable you to attribute improvements, whether from AI adoption or other changes, to specific areas of your workflow. Selecting the right solution depends on your organizational context, team maturity, and goals, but the key is having a measurement foundation before you try to quantify AI’s impact.
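As a rough illustration of the kind of measurement foundation described above, here is a minimal sketch of computing two basic delivery indicators, cycle time and throughput, from work-item timestamps. The item data and field layout are hypothetical placeholders, not the output of any specific SEI or VSM product:

```python
from datetime import datetime

# Hypothetical work items: (started, finished) timestamps,
# e.g. exported from a board or VSM tool.
items = [
    (datetime(2025, 9, 1), datetime(2025, 9, 5)),
    (datetime(2025, 9, 2), datetime(2025, 9, 10)),
    (datetime(2025, 9, 6), datetime(2025, 9, 9)),
]

# Cycle time: elapsed days from start to finish, per item.
cycle_times = [(done - start).days for start, done in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished per week over the observed window.
window_days = (max(d for _, d in items) - min(s for s, _ in items)).days
throughput_per_week = len(items) / (window_days / 7)

print(f"avg cycle time: {avg_cycle_time:.1f} days")
print(f"throughput: {throughput_per_week:.1f} items/week")
```

The point isn’t the arithmetic; it’s that a baseline like this must exist before you can attribute any change, AI-driven or otherwise, to a cause.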

The Current Backlash and Why We Shouldn’t Overreact

Recent research and commentary have sparked a wave of caution around AI in software engineering.

A controlled trial by METR (2025) found that experienced developers using AI tools on their repositories took 19% longer to complete tasks than without AI, despite believing they were 20% faster. The 2024 DORA report found similar patterns: a 25% increase in AI adoption correlated with a 1.5% drop in delivery throughput and a 7.2% decrease in delivery stability. Developers felt more productive, but the system-level metrics told another story.

Articles like AI Promised Efficiency. Instead, It’s Making Us Work Harder (Afterburnout, n.d.) point to increased cognitive load, context switching, and the need for constant oversight of AI-generated work. These findings have fed a narrative that AI “isn’t working” or is causing burnout.

But from my perspective, this moment is less about AI failing and more about a familiar pattern: new technology initially disrupts before it levels up those who learn to use it well. The early data reflects an adoption phase, not the end state.

Our Teams’ Approach

Our organization is embracing an AI-first culture, driven by senior technology leadership and by senior engineers who are leading the charge: innovating, experimenting, and mastering the latest tools and LLMs. However, many teams are earlier in their adoption journey and can feel intimidated by these pioneers. In our division, my focus is on encouraging, training, and supporting engineers to adopt AI tools, gain hands-on experience, explore use cases, and identify gaps. The goal isn’t immediate mastery but building the skills and confidence to use these tools effectively over time.

Only after sustained, intentional use, months down the line, will we have an informed, experienced team that can provide meaningful feedback on the actual outcomes of adoption. That’s when we’ll honestly know where AI is moving the needle, and where it isn’t.

How I Respond When Asked “Is AI Working?”

This approach is inspired by Laura Tacho, CTO at DX, and her recent presentation at LeadDev London, How to Cut Through the Hype and Measure AI’s Real Impact (Tacho, 2025). As a leader, when I face the “how effective is AI?” debate, I ground my answer in three points:

1. How we are performing

We measure our system performance with the same Flow Metrics we used before AI: quality, stability, time-to-value, and other delivery health indicators. We document any AI-related changes to the system, tools, or workflows so we can tie changes in metrics back to their potential causes.

2. How AI is helping (or not helping)

We track where AI is making measurable improvements, where it’s neutral, and where it may be introducing new friction. This is about gaining an honest understanding of where AI is adding value and where it needs refinement.

3. What we will do next

Based on that data and team feedback, we adjust. We expand AI use where it’s working, redesign where it’s struggling, and stay disciplined about aligning AI experiments to actual system constraints.

This framework keeps the conversation grounded in facts, not hype, and shows that our AI adoption strategy is deliberate, measurable, and responsive.

What System Are We Optimizing?

When I refer to “the system,” I mean the structure and process by which ideas flow through our organization, become working software, and deliver measurable value to customers and the business.

Using a Value Stream Management and Product Operating Model approach together gives us that view:

  • Value stream: the whole journey of work from ideation to delivery to customer realization, including requirements, design, build, test, deploy, operate, and measure.
  • Product operating model: persistent, cross-functional teams aligned to products that own outcomes across the lifecycle.

Together, these models reveal not just who is doing the work, but how it flows and where the friction is. That’s where AI belongs, improving flow, clarity, quality, alignment, and feedback across the system.

The Mistake Many Are Making

Too many organizations inject AI into the wrong parts of the system, often where the constraint isn’t. Steve Pereira’s It’s time for AI to meet Flow (Pereira, 2025) captures it well: more AI output can mean more AI-supported rework if you’re upstream or downstream of the actual bottleneck.

This is why I believe AI must be tied to flow improvement:

  1. Make the work visible – Map how work moves, using both our existing metrics and AI to visualize queues, wait states, and handoffs.
  2. Identify what’s slowing it down – Use flow metrics like cycle time, WIP, and throughput to find constraints before applying AI.
  3. Align stakeholders – AI can synthesize input from OKRs, roadmaps, and feedback, so we’re solving the right problems.
  4. Prototype solutions quickly – Targeted, small-scale AI experiments validate whether a constraint can be relieved before scaling.
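The constraint-finding step above can be sketched as a simple comparison of average wait time per stage. The stage names and numbers here are illustrative placeholders, not real data from our value stream:

```python
# Illustrative average wait times (hours) per workflow stage,
# e.g. aggregated from a board or VSM tool export.
stage_wait_hours = {
    "refinement": 6.0,
    "development": 10.0,
    "code review": 38.0,
    "testing": 14.0,
    "deploy": 4.0,
}

# The stage with the longest average wait is the likely constraint;
# that's where a targeted AI experiment is worth trying first.
constraint = max(stage_wait_hours, key=stage_wait_hours.get)
print(f"likely constraint: {constraint} "
      f"({stage_wait_hours[constraint]:.0f}h avg wait)")
```

In this made-up example, code review dominates the wait time, so an AI experiment on PR summarization or review assistance would beat another code-generation rollout.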

Role-by-Role AI Adoption Across the Value Stream

AI isn’t just for software engineers; it benefits every role on a cross-functional team. Here are a few examples of how it can make an impact; each role has many more than those listed below.

Product Managers / Owners

  • Generate product requirements documentation.
  • Analyze customer, market, and outcome metrics.
  • Groom backlogs; draft user stories and acceptance criteria.
  • Summarize customer feedback and support tickets.
  • Use AI to prepare for refinement and planning.

QA Engineers

  • Generate test cases from acceptance criteria or code diffs.
  • Detect coverage gaps and patterns in flaky tests.
  • Summarize PR changes to focus testing.

Domain Architects

  • Visualize system interactions and generate diagrams.
  • Validate design patterns and translate business rules into architecture.

Platform Teams

  • Generate CI/CD configurations.
  • Enforce architecture and security standards with automation.
  • Identify automation opportunities from delivery metrics.

InfoSec Liaisons

  • Scan commits and pull requests (PRs) for risky changes.
  • Draft compliance evidence from logs and release data.

Don’t Forget the Extended Team

Sales, marketing, legal, and finance all influence the delivery flow. AI can help here, too:

  • Sales: Analyze and generate leads, summarize customer engagements, and highlight trends for PMs.
  • Marketing: Draft launch content from release notes.
  • Legal: Flag risky language, summarize new regulations.
  • Finance: Model ROI of roadmap options, forecast budget impact.

Risk and Resilience

What happens when AI hits limits or becomes unavailable? Inference isn’t free; costs will rise, subsidies will fade, and usage may be capped. Do you have fallback workflows? Do you maintain manual expertise and measure AI’s ROI beyond activity? This is another reason to gain experience with these tools now: to improve our efficiency and understand our usage patterns.

The Opportunity

We already have the data to see how our system performs. The real opportunity is to aim AI at the constraints those metrics reveal, removing friction, aligning teams, and improving decision-making. If we take the time to learn the tools now, we’ll be ready to use them where they matter most.

What Now?

We already have the metrics to see how our system performs. The real opportunity is to apply AI purposefully across the full lifecycle, from ideation and design, through development, testing, deployment, and into operations and business alignment. By directing AI toward the right constraints, we eliminate friction, unify our teams around clear metrics, and elevate decision-making at every step.

Yes, AI adoption is a learning journey. We’ll stumble, experiment, and iterate, but with intention, measurement, and collaboration, we can turn scattered experiments into a sustained competitive advantage. AI adoption is about transforming or improving the system itself.

AI isn’t failing, it’s maturing. We’re early on the adoption curve. Our challenge, and our opportunity, is to build the muscle and culture to deploy AI across the lifecycle, turning today’s experiments into tomorrow’s engineered advantage.

For anyone still hesitant, know this: AI isn’t going away. Whether it slows us down or speeds us up, we must learn to use it well, or we risk being left behind. Let’s learn. Let’s measure. Let’s apply AI where it’s most relevant and learn to understand its current benefits and limitations. There’s no going back, only forward.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

Afterburnout. (n.d.). AI promised efficiency. Instead, it’s making us work harder. Afterburnout. https://afterburnout.co/p/ai-promised-to-make-us-more-efficient

Clark, P. (2025, July). AI is improving software engineering. But it’s only one piece of the system. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/07/ai-is-improving-software-engineering-but-its-only-one-piece-of-the-system/

METR. (2025, July 10). Measuring the impact of early-2025 AI on experienced open-source developer productivity. METR. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Pereira, S. (2025, August 8). It’s time for AI to meet flow: Flow engineering for AI. Steve Pereira. https://stevep.ca/its-time-for-ai-to-meet-flow/

State of DevOps Research Program. (2024). 2024 DORA report. Google Cloud / DORA.

Tacho, L. (2025, June). How to cut through the hype and measure AI’s real impact. Presentation at LeadDev London.  https://youtu.be/qZv0YOoRLmg?si=aMes-VWyct_DEWz0

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

AI Is Improving Software Engineering. But It’s Only One Piece of the System

July 31, 2025 by philc

5 min read

A follow-up to my last post Leading Through the AI Hype in R&D, this piece explores how strong AI adoption still needs system thinking, responsibility, and better leadership focus.

Leaders are moving fast to adopt AI in engineering. The urgency is real, and the pressure is growing. But many are chasing the wrong kind of improvement, or rather, focusing too narrowly.

AI is transforming software engineering, but it addresses only one part of a much larger system. Speeding up code creation doesn’t solve deeper issues like unclear requirements, poor architecture, or slow feedback loops, and in some cases, it can amplify dysfunction when the system itself is flawed.

Engineers remain fully responsible for what they ship, regardless of how the code is written. The real opportunity is to increase team capacity and deliver value faster, not to reduce cost or inflate output metrics.

The bigger risk lies in how senior leaders respond to the hype. When buzzwords instead of measurable outcomes drive expectations, focus shifts to the wrong problems. AI is a powerful tool, but progress requires leadership that stays grounded, focuses on system-wide improvement, and prioritizes accountability over appearances.

A team member recently shared Writing Code Was Never the Bottleneck by Ordep. It cut through the noise. Speeding up code writing doesn’t solve the deeper issues in software delivery. That article echoed what I’ve written and experienced myself: AI helps, but, at least currently, not where many think it does.

This post builds on my earlier post, Leading Through the AI Hype in R&D. That post challenged hype-driven expectations. This one continues the conversation by focusing on responsibility, measurement, and real system outcomes.

Code Implementation Is Rarely the Bottleneck

Tools like Copilot, Claude Code, Cursor, Devin, and others can help developers write code faster. But that’s not where most time is lost.

Delays come from vague requirements, missing context, architecture problems, slow reviews, and late feedback. Speeding up code generation in that environment doesn’t accelerate delivery. It accelerates dysfunction.

I Use AI in My Work

I’ve used agentic AI tools to implement code, write services, and improve documentation. It’s productive, but it takes consistent review. I’ve paused, edited, and rewritten plenty of AI-generated output.

That’s why I support adoption. I created a tutorial to help engineers in my division learn to use AI effectively. It saves time. It adds value. But it’s not automatic. You still need structure, process, and alignment.

Engineers Must Own Impact, Not Just Output

Using AI doesn’t remove responsibility. Engineers are still accountable for what their code does once it runs.

They must monitor quality, performance, cost, and user impact. AI can generate a function. But if that function causes a spike in memory usage or breaks under scale, someone has to own that.

I covered this in Responsible Engineering: Beyond the Code – Owning the Impact. AI makes output faster. That makes responsibility more critical, not less. Code volume isn’t the goal. Ownership is.

Code Is One Step in a Larger System

Software delivery spans more than development. It includes discovery, planning, testing, release, and support. AI helps one step. But problems often live elsewhere.

If your system is broken before and after the code is written, AI won’t help. You need to fix flow, clarify ownership, and reduce friction across the whole value stream.

Small Teams Increase Risk Without System Support

Some leaders believe AI allows smaller teams to do more. That’s only true if the system around them improves too.

Smaller teams carry more scope. Cognitive load increases. Knowledge becomes harder to spread. Burnout rises.

Support pressure also grows. The same few experts get pulled into production issues. AI doesn’t take the call. It doesn’t debug or triage. That load falls on people already stretched thin.

When someone leaves, the risk is bigger. The team becomes fragile. Response times are slow. Delivery slips.

The Hard Part Is Not Writing the Code

One of my engineers said it well. Writing code is the easy part. The hard part is designing systems, maintaining quality, onboarding new people, and supporting the product in production.

AI helps with speed. It doesn’t build understanding.

AI Is a Tool. Not a Strategy

I support using AI. I’ve adopted it in my work and encourage others to do the same. But AI is a tool. It’s not a replacement for thinking.

Use it to reduce toil. Use it to improve iteration speed. But don’t treat it as a strategy. Don’t expect it to replace engineering judgment or improve systems on its own.

Some leaders see AI as a path to reduce headcount. That’s short-sighted. AI can increase team capacity. It can help deliver more features, faster. That can drive growth, expand market share, and increase revenue. The opportunity is to create more value, not simply lower cost.

The Metrics You Show Matter

Senior leaders face pressure to show results. Investors want proof that AI investments deliver value. That’s fair.

The mistake is reaching for the wrong metrics. Commit volume, pull requests, and code completions are easy to inflate with AI. They don’t reflect real outcomes.

This is where hype causes harm. Leaders start chasing numbers that match the story instead of measuring what matters. That weakens trust and obscures the impact.

If AI is helping, you’ll see better flow. Fewer delays. Faster recovery. More predictable outcomes. If you’re not measuring those things, you’re missing the point.

AI Is No Longer Optional

AI adoption in software development is no longer a differentiator. It’s the new baseline.

Teams that resist it will fall behind. No investor would approve a team using hammers when nail guns are available. The expectation is clear. Adopt modern tools. Deliver better outcomes. Own the results.

What to Focus On

If you lead AI adoption, focus on the system, not the noise.

  • Improve how work moves across teams
  • Reduce delays between steps
  • Align teams on purpose and context
  • Use AI to support engineers, not replace them
  • Measure success with delivery metrics, not volume metrics
  • Expect engineers to own what they ship, with or without AI

You don’t need more code. You need better outcomes. AI can help, but only if the system is healthy and the people are accountable.

The hype will keep evolving. So will the tools. But your responsibility is clear. Focus on what’s real, what’s working, and what delivers value today.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Clark, Phil. Leading Through the AI Hype in R&D. Rethink Your Understanding. July 2025. Available at: https://rethinkyourunderstanding.com/2025/07/leading-through-the-ai-hype-in-rd
  2. Ordep. Writing Code Was Never the Bottleneck. Available at: https://ordep.dev/posts/writing-code-was-never-the-bottleneck
  3. Clark, Phil. Responsible Engineering: Beyond the Code – Owning the Impact. Rethink Your Understanding. March 2025. Available at: https://rethinkyourunderstanding.com/2025/03/responsible-engineering-beyond-the-code-owning-the-impact

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Metrics, Product Delivery, Software Engineering

Leading Through the AI Hype in R&D

July 27, 2025 by philc

7 min read

Note: AI is evolving rapidly, transforming workflows faster than expected. Most of us can’t predict how quickly or to what level AI will change our teams or workflow. My focus for this post is on the current state, pace of change, and the reality vs hype at the enterprise level. I promote the adoption of AI and encourage every team member to embrace it.

I’ve spent the past few weeks deeply immersed in “vibe coding” and experimenting with agentic AI tools during my nights and weekends, learning how specialized agents can orchestrate like real product teams when given proper context and structure. But in my day job as a senior technology leader, the tone shifts. I’ve found myself in increasingly chaotic meetings with senior leaders, chief technology officers, chief product officers, and engineering VPs, all trying to out-expert each other on the transformative power of AI on product and development (R&D) teams.

The energy often feels like a pitch room, not a boardroom. Someone declares Agile obsolete. Another suggests we can replace six engineers with AI agents. A few toss around claims of “30× productivity.” I listen, sometimes fascinated, often frustrated, at how quickly the conversation jumps to conclusions without asking the right questions. More troubling, many of these executives are under real pressure from investors and ownership to show ROI. If $1M is spent on AI adoption, how do we justify the return? What metrics will we use to report back?

Hearing the Hype (and Feeling the Exhaustion)

One executive confidently declared, “Agile and Lean are dead,” citing the rise of autonomous AI agents that can plan, code, test, and deploy without human guidance. His opinion echoed a recent blog post, Agile Is Dead: Long Live Agentic Development, which criticized Agile rituals like daily stand-ups and sprints as outdated and encouraged teams to let agents take over the workflow¹. Meanwhile, agile coaches argue that bad Agile, not Agile itself, is the real problem, and that AI can strengthen Agile if applied thoughtfully.

The hype escalates when someone shares stories of high-output engineering from senior developers keeping pace with AI capabilities: 70 AI-assisted commits in a single night, barely touching the keyboard. Another proposes shrinking an 8-person team to just two engineers, one writing prompts and one overseeing quality, as the AI agents do the rest. These stories are becoming increasingly common, especially as research suggests that AI can dramatically reduce the number of engineers needed for many projects². Elad Gil even claimed most engineering teams could shrink by 5×–10×.

But these same reports caution against drawing premature conclusions. They warn that while AI enables productivity gains, smaller teams risk creating knowledge silos, reduced quality, and overloading the remaining developers². Other sources echo this risk: Software Engineering Intelligence (SEI) tools have flagged increased fragility and reduced clarity in AI-generated code when review practices and documentation are lacking³.

What If We’re Already Measuring the Right Things?

While executives debate whether Agile is dead, I find myself thinking: we already have the tools to measure AI’s impact, we just need to use them.

In my division, we’ve spent years developing a software delivery metrics strategy centered on Value Stream Management, Flow Metrics, and team sentiment. These metrics already show how work flows through the system, from idea to implementation to value. They include:

  • Flow metrics like distribution, throughput, time, efficiency, and load
  • Quality indicators like change failure rate and security defect rate
  • Sentiment and engagement data from team surveys
  • Outcome-oriented metrics like anticipated outcomes and goal (OKR) alignment

Recently, I aligned our Flow Metrics with the DX Core 4 Framework⁴, organizing them into four key categories: speed, effectiveness, quality, and impact. We made these visual and accessible, with a simple chart showing how each metric relates to delivery health. These metrics don’t assume Agile is obsolete or that AI is the solution. They track how effectively our teams are delivering value.
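A minimal way to express that alignment is a plain mapping of existing metrics into the four categories. The grouping below is illustrative of our own Flow Metrics, not the official DX Core 4 definition:

```python
# Illustrative mapping of existing flow and quality metrics into
# the four DX Core 4 categories; the grouping is an example from
# our own metrics, not the official framework specification.
core4 = {
    "speed": ["flow time", "flow velocity (throughput)"],
    "effectiveness": ["flow efficiency", "flow load"],
    "quality": ["change failure rate", "security defect rate"],
    "impact": ["OKR alignment", "anticipated outcomes"],
}

for category, metrics in core4.items():
    print(f"{category}: {', '.join(metrics)}")
```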

So when senior leaders asked, “How will we measure AI’s impact?” I reminded them, we already are. If AI helps us move faster, we’ll see it in flow time. If it increases capacity, we’ll see it in throughput (flow velocity). If it maintains or improves quality, our defect rates and sentiment scores will reflect that. The same value stream lens that shows us where work gets stuck will also reveal whether AI helps us unstick it.

Building on Existing Metrics: The AI Measurement Framework

Instead of creating an entirely new system, I layered an existing AI Measurement Framework on top of our current performance metrics⁵. The framework includes three categories:

  1. Utilization:
    • % of AI-generated code
    • % of developers using AI tools
    • Frequency of AI-agent use per task
  2. Impact:
    • Changes in flow metrics (faster cycle time)
    • Developer satisfaction or frustration
    • Delivered value per team or engineer
  3. Cost:
    • Time saved vs. licensing and premium token cost
    • Net benefit of AI subscriptions or infrastructure
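For the cost category above, a back-of-the-envelope net-benefit calculation might look like the sketch below. Every figure is a hypothetical placeholder, not our actual spend or savings:

```python
# Hypothetical monthly figures for one team -- placeholders only.
engineers_using_ai = 8
hours_saved_per_engineer = 6.0     # estimated from surveys/telemetry
loaded_hourly_cost = 100.0         # fully loaded cost per eng-hour

license_cost = engineers_using_ai * 30.0   # seats * per-seat subscription
token_cost = 250.0                         # premium model usage

# Net benefit: value of time saved minus tool spend.
time_value = engineers_using_ai * hours_saved_per_engineer * loaded_hourly_cost
spend = license_cost + token_cost
net_benefit = time_value - spend

print(f"time value: ${time_value:,.0f}")
print(f"spend:      ${spend:,.0f}")
print(f"net benefit: ${net_benefit:,.0f}")
```

The fragile input here is hours_saved_per_engineer, which is why the utilization and impact categories matter: without them, this number is a guess, and the ROI story built on it is, too.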

This approach answers the following questions: Are developers using AI tools? Does that usage make a measurable difference? And does the difference justify the investment?

In a recent leadership meeting, someone asked, “What percentage of our engineers are using AI to check in code?” That’s an adoption metric, not a performance one. Others have asked whether we can measure AI-generated commits per engineer to report to the board. While technically feasible with specific developer tools, this approach risks reinforcing vanity metrics that prioritize motion over value. Without impact and ROI metrics, adoption alone can lead to gaming behavior, and teams might flood the system with low-value tasks to appear “AI productive.” What matters is whether AI is helping us deliver better, faster, and smarter.

I also recommend avoiding vanity metrics, such as lines of code or commits. These often mislead leaders into equating motion with value. Many vendors boast “AI wrote 50% of our code,” but as developer-experience researcher Laura Tacho explains, this usually counts accepted suggestions, not whether the code was modified, deleted, or even deployed.⁵ We must stay focused on outcomes, not outputs.

The Risk of Turning AI into a Headcount Strategy

One of the more concerning trends I’m seeing is the concept of “headcount conversion,” which involves reducing team size and utilizing the savings to fund enterprise AI licenses. If seven people can be replaced by two and an AI license, along with a premium token budget, some executives argue, then AI “pays for itself.” However, this assumes that AI can truly replace human capability and that the work will maintain its quality, context, and business value.

That might be true for narrow, repeatable tasks, or small organizations or startups struggling with costs and revenue. But it’s dangerous to generalize. AI doesn’t hold tribal knowledge, coach junior teammates, or understand long-term trade-offs. It’s not responsible for cultural dynamics, systemic thinking, or ethical decisions.

Instead of shrinking teams, we should consider expanding capacity. AI can help us do more with the same people. Developer productivity research indicates that engineers typically reinvest AI-enabled time savings into refactoring, enhancing test coverage, and implementing cross-team improvements², which compounds over time into stronger, more resilient software.

Slowing Down to Go Fast

Leaving those leadership meetings, I felt a mix of energy and exhaustion. Many people wanted to appear intelligent, but few were asking thoughtful questions. We were racing toward solutions without clarifying what problem we were solving or how we’d measure success.

So here’s my suggestion: Let’s slow down. Let’s agree on how we’ll track the impact of AI investments. Let’s integrate those measurements into systems we already trust. And let’s stop treating AI as a replacement for frameworks that still work; instead, let’s use it as a powerful tool that helps us deliver better, faster, and with more intention.

AI isn’t a framework. It’s an accelerator. And like any accelerator, it’s only valuable if we’re steering in the right direction.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Leschorn, J. (2025, May 29). Agile Is Dead: Long Live Agentic Development. Superwise. https://superwise.ai/blog/agile-is-dead-long-live-agentic-development/
  2. Ameenza, A. (2025, April 15). The New Minimum Viable Team: How AI Is Shrinking Software Development Teams. https://anshadameenza.com/blog/technology/ai-small-teams-software-development-revolution/
  3. Circei, A. (2025, March 13). Measuring AI in Engineering: What Leaders Need to Know About Productivity, Risk and ROI. Waydev. https://waydev.co/ai-in-engineering-productivity-risk-roi/
  4. Saunders, M. (2025, January 6). DX Unveils New Framework for Measuring Developer Productivity. InfoQ. https://www.infoq.com/news/2025/01/dx-core-4-framework/
  5. GetDX. (2025). Measuring AI Code Assistants and Agents. DX Research. https://getdx.com/research/measuring-ai-code-assistants-and-agents/

Filed Under: Agile, AI, Delivering Value, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management


Copyright © 2026 · RYU Advisory & Media, LLC. All rights reserved.
