

Staying Was the Hard Move

February 28, 2026 by philc

7 min read

I’m at a reflection point in my career. As I consider my next steps and explore new opportunities, executive recruiters often pause at one line on my resume.

Thirteen years at Parchment.

Sometimes the question is curious. Sometimes it is cautious.

“You were there for thirteen years. Why?”

I get it.

In tech, long tenure can be mistaken for comfort. But that single line compresses a decade of reinvention. If you want more detail on how my scope evolved over that time, I wrote a previous companion piece, “So, What Does a VP of Software Engineering Do?”1

I did not spend thirteen years doing the same job at the same company.

I spent thirteen years inside a company that kept changing its scale, constraints, expectations, and the operating model required to win.

When I joined, Parchment wasn’t yet the dominant product in its market. We had a strong mission and real traction, but we were still earning broad market awareness and trust. By the time we exited, Parchment had become the category leader. That arc alone is rare to live through, and even rarer to help shape.

Most of my work lived in the hard middle of transformation: modernizing while the business is still running, customers still depend on the platform, and the organization cannot stop shipping just because you are improving the system.

What changed

Over those years, the technology and practices changed dramatically.

We moved from an on-prem mindset to cloud architecture, modernized from a monolithic structure to a service-based architecture, and shifted delivery practices from waterfall to Agile and flow-based execution with stronger automation and quality discipline.

In the final chapter, we also began integrating AI into engineering, moving from ad hoc experimentation toward more responsible, secure usage patterns and learning loops.

One of my goals through that journey was to make Parchment’s transformation referenceable, especially in an industry where many leaders talk about transformation while also trading stories about Agile failure.

We had real challenges, but we were building something that worked: a modern delivery system that scaled without sacrificing reliability.

Over time, we went from two major releases a year to a high-velocity delivery engine measured in several daily deployments and thousands of production changes annually, enabled by automation and operational discipline.

Personally, it meant going from an early-stage business when I joined to a company doing roughly $100M in revenue by the time of the Instructure acquisition. Instructure publicly stated that Parchment was expected to contribute roughly $115M in revenue in 2024, while the announced transaction value was approximately $835M.2

Why I stayed

And that is also one of the real reasons I stayed.

Very few leaders get the opportunity to live inside a hypergrowth company long enough to help build it from early-stage reality to a high-scale, high-valuation outcome. Even fewer get to do it while the company repeatedly reinvents its technology and operating model.

That kind of experience is worth far more than a salary. It is a form of leadership education you cannot fast-track, and many people simply do not get the chance to see it through.

In high-growth environments, many capable leaders do not make it to later chapters, not because they lack talent, but because the company changes faster than roles and expectations can keep pace.

One more point matters: none of this happens without elite talent. I had the privilege of working alongside some of the best technologists and leaders I have ever worked with, people who could think, innovate, and execute under pressure while keeping quality and customers at the center.

The leadership lessons

My own trajectory mirrored the company’s evolution. I joined as a director leading a small team, expanded to lead all of engineering, grew into a Senior Director role, and in 2019, I stepped into the Vice President of Engineering role.

The early years were more tactical and closer to the work. The later years were heavily strategic, with a broader operating system to build and sustain.

Early on, the critical lesson was how to modernize without breaking trust. I call it the Legacy Mirror. Legacy systems are not just old code. They represent customer trust, business rules, and constraints that fund the next chapter. So instead of betting the company on a rewrite, we modernized in safe slices while keeping the platform stable for customers.

Between roughly 2014 and 2019, we had to find our footing. The business was growing, but the technology division was changing underneath it. Leadership changed. Talent changed. Structures evolved.

A big part of the work was building a vision-forward leadership team, clarifying ownership, and stabilizing the operating model enough to compound later.

As the stakes rose, my role evolved from executing tactics to executive operating. Multi-year strategy. Translating engineering decisions into outcomes. Operating under pressure without letting urgency turn into chaos.

One moment that still sticks with me was the demand to measure individual engineer productivity the way sales or support are measured. I understood the intent, but I pushed back hard. Software engineering is knowledge work with unknown unknowns.

Individual metrics like lines of code, commits, or tickets closed are misleading and create dysfunction. People optimize their personal scoreboard instead of helping the team, collaboration drops, and throughput often goes down.

Teams deliver software, not individuals. So I reframed the conversation around team flow, quality, and outcomes.

You know the cost of a team. Team and flow metrics tell you whether the investment is delivering predictability, stability, and customer value. Individual issues belong in coaching and accountability at the manager level, not in system-wide individual scorekeeping.
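To make that contrast concrete, here is a minimal sketch, in Python and not tied to any particular tracker or to Parchment's tooling, of what team-level flow metrics can look like. The work items, field names, and numbers are illustrative assumptions.

```python
from datetime import datetime
from statistics import median

# A minimal, illustrative sketch of team-level flow metrics.
# The work items and field names below are assumptions for the example.
work_items = [
    {"id": "A-101", "started": datetime(2025, 1, 6), "finished": datetime(2025, 1, 9)},
    {"id": "A-102", "started": datetime(2025, 1, 7), "finished": datetime(2025, 1, 14)},
    {"id": "A-103", "started": datetime(2025, 1, 10), "finished": datetime(2025, 1, 13)},
]

def team_flow_metrics(items, window_days=30):
    # Cycle time: calendar days from "started" to "finished" per item.
    cycle_times = sorted((i["finished"] - i["started"]).days for i in items)
    return {
        "throughput_per_week": round(len(items) / (window_days / 7), 2),
        "median_cycle_time_days": median(cycle_times),
        "p85_cycle_time_days": cycle_times[int(0.85 * (len(cycle_times) - 1))],
    }

print(team_flow_metrics(work_items))
```

Notice that nothing here attributes output to an individual; the unit of measurement is the team's flow.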

As expectations grew, I also learned how to speak the language of valuation without losing the plot.

In many environments, EBITDA becomes a scorecard because it forces explicit tradeoffs and a cadence that reduces surprises. That changes how roadmaps get justified. Work needs to map to growth, retention protection, risk reduction, and margin improvement through lower cost-to-serve.

Depending on the stage of the investment and the time box for outcomes, the ratios and targets shift, and the tension between capital for growth and operational spend tightens or loosens. That directly impacts what gets funded, what gets deferred, and how quickly technical debt accumulates.

In that environment, technical hygiene becomes one of the hardest leadership decisions. It is easy to fund features. It is harder to fund refactoring, dependency upgrades, test reliability, and platform maintenance when EBITDA is tight. But ignoring hygiene creates compounding drag: slower delivery, higher incident load, and more effort spent keeping the system upright.

The only way to protect it in a time-boxed, target-driven environment is to translate hygiene into a business case: reduced cost-to-serve, fewer incidents, faster delivery, and avoided risk that would otherwise show up as churn, support cost, or missed commitments.
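As an illustration of what translating hygiene into a business case can look like, here is a deliberately simple, back-of-the-envelope sketch. Every figure is a placeholder assumption, not a number from Parchment or any other company mentioned here.

```python
# A deliberately simple hygiene business case. All figures are assumptions.
incidents_avoided_per_year = 12
avg_incident_cost = 8_000                # engineer time plus customer credits (assumed)
hours_saved_per_engineer_per_week = 2    # fewer flaky tests and broken builds (assumed)
engineers = 40
loaded_hourly_rate = 90                  # fully loaded cost per engineering hour (assumed)
working_weeks = 48

incident_savings = incidents_avoided_per_year * avg_incident_cost
capacity_recovered = (hours_saved_per_engineer_per_week * working_weeks
                      * engineers * loaded_hourly_rate)

print(f"Avoided incident cost:        ${incident_savings:,.0f} per year")
print(f"Recovered delivery capacity:  ${capacity_recovered:,.0f} per year")
```

The point is not precision; it is putting hygiene in the same units as the rest of the EBITDA conversation.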

At the same time, you still have to keep the human system healthy. Meeting efficiency demands while sustaining engagement is not optional.

In the later years, a meaningful part of the strategy was building the people system: career paths, performance reviews, coaching expectations, and clearer growth signals that made it easier to attract and keep top talent.

We did not always pay the highest in the industry, but we maintained strong retention of top performers, and our division consistently carried some of the highest eNPS scores in the organization.

But none of it matters if customers are not winning.

Putting the customer first is not a slogan. Transformation only counts if customers benefit while you are changing the organization. If quality drops, satisfaction drops. If you start losing customers, you cannot fund the next wave of improvement. The hard middle is learning how to change without breaking trust.

Another major part of the experience was M&A. I participated in technical diligence across multiple acquisitions and then helped integrate the teams and technologies afterward. Integration is where operating models get tested. Can you absorb change without breaking delivery, quality, or culture?

What I carry forward

The last point is this: digital transformation takes time, and it is never done. Modernizing architecture, delivery practices, team design, and culture is not a one-year project. It is reinforcement, iteration, and rebuilding what breaks at the next scale.

The work will continue after my departure.

We built an operating system designed to survive leadership transitions, as long as the culture and talent remain strong. The next chapter is theirs to write, and they have the ingredients that matter: strong managers, durable practices, and a well-performing delivery structure. With that foundation in place, they can keep improving visibility into flow and outcomes and keep integrating AI in responsible, practical ways.

So the real reason I stayed is simple: it was compounding.

I contributed to shaping the strategy through major growth, shifting market demands, and changing stakeholder pressures, then lived with the consequences and led through the hard middle. That combination is rare, demanding, and deeply rewarding.

And my core belief coming out of it is simple: the hardest thing to build is not software. It is a system and culture that can absorb change and still deliver at today’s pace.

Author’s Note: The experiences referenced in this article reflect lessons learned across multiple organizations. Certain identifying details have been generalized to preserve confidentiality.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Phil Clark, “So, What Does a VP of Software Engineering Do?” Rethink Your Understanding, August 21, 2025, https://rethinkyourunderstanding.com/2025/08/so-what-does-a-vp-of-engineering-do/
  2. Instructure Holdings, Inc., “Instructure Signs Definitive Agreement to Acquire Parchment, the World’s Largest Academic Credential Management Platform and Network,” press release, October 30, 2023, accessed February 28, 2026, https://www.instructure.com/press-release/instructure-signs-definitive-agreement-acquire-parchment-worlds-largest-academic

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

Agile Isn’t Dead and AI Isn’t Killing It Either

January 24, 2026 by philc

AI Is Rebundling Roles, Shrinking Some Teams, and Raising the Bar for Responsible Delivery

9 min read

My first article for 2026. I’ve been back in my software roots, weeks of looping with Geoffrey Huntley’s Ralph Wiggum, visiting Steve Yegge’s Gastown, and swapping my earlier AI requirements and repo/tasking workflows for tighter, spec-first execution: GSD (Getting Sh*t Done repo) style, with planning modes that actually keep pace.

As much fun as I have been having implementing code, this article is about leadership and software delivery, not a new AI tool. It was sparked by a headline I’ve seen so many times I can almost predict it: “moving away from Agile,” “Agile is obsolete,” “Agile is dead.”

This time it was a YouTube title from a major consulting firm: “Moving away from Agile: What’s Next” (McKinsey). I wasn’t surprised; consulting narratives have a way of “ending” whatever you’re doing to make room for the next wave of services. I’m not here to debate the video. I’m here to challenge the pattern behind that headline, because it keeps coming back, and now it’s being repackaged as an AI-era conclusion.

I keep seeing “Agile is dead” headlines, now repackaged for the AI era. My take: AI isn’t killing Agile. AI is illuminating constraints that were already in the value stream.

If coding gets faster and lead time doesn’t improve, the bottleneck was never engineering output. It was prioritization, dependencies, validation, operability, and decision latency.

That’s the problem with the “Agile is dead” narrative: it confuses a delivery wrapper with a business capability.

Agility is not a sprint calendar, a Jira workflow, or a job title. Agility is a capability: the organizational skill to sense change, make decisions, and deliver value quickly enough to learn and adapt before the market moves again. Put prototypes in customers’ hands sooner. Shorten the time between “we think” and “we know.” Reduce the cost of being wrong. That capability is a competitive requirement in modern software businesses, not a trend we can retire.

In the post, I outline the load-bearing responsibilities that never go away, why roles will rebundle as teams shrink, and why Value Stream Management (VSM) and flow metrics matter more as AI increases delivery capacity. I’m genuinely curious about where others see the constraint shift as AI adoption grows.

In November, I wrote When AI Isn’t Enough (https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/) to make a simple point: AI accelerates output, but it doesn’t replace fundamentals, judgment, or accountability. This article is a follow-up to that argument, focusing on the delivery operating model.

What is changing, and changing fast, is how teams cover the work. AI compresses execution time, reshapes roles, and makes low-value ceremony impossible to defend. But it does not delete the responsibilities required to deliver software safely in the real world.

So no, Agile isn’t dead. Maybe what’s dying is Agile theater.

The debate is mislabeled

When people say “Agile is dead,” they’re often reacting to dysfunction that deserves to die:

  • Standups that are status meetings
  • Backlogs that are graveyards
  • Sprint plans that exist to create the appearance of control
  • Story points treated like productivity
  • “Agile transformations” where leadership behavior never changed

If that’s your lived experience, the conclusion feels tempting: the system is heavy and slow, and AI just made the contrast painful.

But that doesn’t mean agility is obsolete. It means your organization was using a process to simulate control.

Agility was never about the ceremonies. Agility is the ability to learn fast under uncertainty. Scrum and Kanban are just different ways to manage that learning loop. AI doesn’t remove the need to steer. It raises the stakes on steering because the engine just got bigger.

Team size can shrink. Responsibility surface area does not

AI is making smaller, stream-aligned teams more feasible in some contexts. You can feel it in the language: “builders” is becoming a popular label precisely because it implies broader ownership, people who can take an idea and move it forward end-to-end with help from tools and agents.

But here’s the part leaders keep getting wrong:
Shrinking the team does not shrink the work that must be covered to deliver and operate software responsibly.

Roles can be consolidated. Responsibilities do not disappear.

AI is collapsing traditional role boundaries within cross-functional teams.

Product managers can now use AI to do meaningful slices of work that used to require separate specialists: synthesize customer feedback at scale, interrogate trends in quantitative data, draft PRDs and acceptance criteria, and produce “good enough” prototypes that accelerate discovery and alignment.

On the delivery side, engineers are increasingly being pulled both upstream and downstream, tightening requirements, exposing edge cases, and improving the spec-to-task chain, while also generating test ideas, acceptance scenarios, and risk-based coverage faster.

This compresses cycle time and rebundles work into fewer hands, but the obligations don’t change: decisions still need evidence and judgment, and shipped changes still must be secure, validated, and operable. AI accelerates artifact creation; it doesn’t shift accountability when those artifacts are wrong.

If you reduce staffing without deliberately reallocating responsibilities, you don’t get a faster team. You get a fragile one that ships wrong faster.

This is the core misunderstanding in the “AI killed Agile” narrative. AI can take on more of the production of work: drafting, synthesizing, generating, and executing. It cannot take on accountability. And it absolutely cannot eliminate the need for clear ownership of the full delivery lifecycle.

The delivery “load-bearing system” that never goes away

No matter what toolchain you use (AI agents, copilots, code generators), a mature product team still has to cover the same end-to-end responsibility surface area across the value stream. AI can accelerate pieces of it, but it doesn’t delete the categories.

It is like the load-bearing structure of a building. You can renovate the interior all you want, swap tools, shrink teams, rebundle roles, automate entire phases, but you don’t get to remove the beams and call it innovation. If you take out the load-bearing parts, the building might look fine for a moment, right up until you add speed, scale, and real customer demand. Then it fails in expensive, public ways.

AI changes the finishing work and the pace of construction. It doesn’t change what’s structurally required for software delivery to hold up under pressure.

You still need outcome clarity: what problem you’re solving, for whom, and what success means.

You still need discovery and validation: evidence, constraints, and signals that the work is worth doing.

You still need work design: thin slicing, sequencing, WIP discipline, and dependency management.

You still need engineering coherence: architecture, contracts, data correctness, security, and privacy-by-design, tradeoffs that hold under change.

You still need verification and resilience: automated tests, performance and reliability checks, security validation, and confidence in recovery.

You still need delivery and operations: CI/CD, safe rollouts, observability, incident readiness, and cost hygiene.

And you still need learning loops: feedback into priorities, retrospectives with teeth, continuous improvement grounded in bottlenecks rather than opinion.

Call it Scrum, call it Kanban, call it flow. This surface area of responsibility is the reality of software delivery. The framework doesn’t change the reality. It changes how you manage it.

What AI actually changes: distribution, not elimination

AI is changing delivery systems in a few predictable ways.

First, it reduces the cost of producing executable clarity. PRDs, briefs, acceptance criteria, architecture options, test cases, runbooks, and documentation can be drafted quickly. That doesn’t remove the need for these artifacts. It changes who can draft them and how fast teams can iterate.

Second, it makes verification loops cheaper and more continuous. This is where the conversation should be. AI does not make quality automatic. It makes quality automation easier if you design for it. The winning pattern isn’t “AI wrote it, ship it.” The winning pattern is “AI drafted it, then the system verified it repeatedly until it earned release confidence.”

Accountability doesn’t move to the model. It stays with the team.
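For readers who want a picture of that pattern, here is a minimal, hypothetical sketch of a release-gate loop. The gate functions are placeholders standing in for a real test runner, security scanner, and contract-test suite; nothing here reflects a specific product's pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

def run_release_gates(change_id: str, gates: List[Callable[[str], CheckResult]]) -> bool:
    """A change ships only when every gate passes; authorship (human or AI) is irrelevant."""
    results = [gate(change_id) for gate in gates]
    for r in results:
        print(f"[{'PASS' if r.passed else 'FAIL'}] {r.name} {r.detail}".rstrip())
    return all(r.passed for r in results)

# Placeholder gates; real ones would invoke your pipeline tooling.
def unit_tests(change_id: str) -> CheckResult:
    return CheckResult("unit tests", True)

def security_scan(change_id: str) -> CheckResult:
    return CheckResult("security scan", True)

def contract_tests(change_id: str) -> CheckResult:
    return CheckResult("contract tests", True)

if run_release_gates("change-123", [unit_tests, security_scan, contract_tests]):
    print("Release confidence earned.")
else:
    print("Back to the author, human or AI.")
```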

Third, it moves the bottleneck upstream. When execution gets cheap, delays show up where they have always lived, but were easier to ignore when coding was slow:

  • Unclear priorities
  • Slow decision-making
  • Messy dependency networks
  • Environment and access friction
  • Data quality and migration risk
  • Compliance and governance
  • Weak observability
  • Unclear ownership

AI makes building faster. It makes those problems louder.

So if your end-to-end lead time doesn’t improve after “AI productivity gains,” don’t assume you outgrew Agile. You just discovered that engineering output was never your constraint. Your value stream was.
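One way to run that test on your own data is to break end-to-end lead time into stages and see how small the active development slice already is. The stage names and durations below are made-up numbers purely for illustration; substitute your own tracker data.

```python
# Hypothetical stage breakdown of end-to-end lead time (in days).
lead_time_by_stage = {
    "waiting for prioritization": 9.0,
    "waiting on decisions and approvals": 4.5,
    "active development": 2.0,             # the slice AI actually speeds up
    "waiting for review and validation": 3.5,
    "release and verification": 1.0,
}

total = sum(lead_time_by_stage.values())
for stage, days in sorted(lead_time_by_stage.items(), key=lambda kv: -kv[1]):
    print(f"{stage:<36} {days:>4.1f} d  ({days / total:5.1%} of lead time)")
```

If the waiting states dominate, speeding up the development slice further will barely move lead time.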

One-pizza teams still need full coverage, so roles rebundle

This is where AI is forcing the real change, and it’s also where the “Agile is dead” headline is most misleading.

In larger teams, you can afford specialists. In smaller teams, the same responsibilities exist, but fewer people are responsible for them. That forces rebundling, and it demands clearer ownership.

Even the smallest stream-aligned team needs a few capability anchors, whether those anchors are full-time roles or shared hats:

  • An outcome anchor who protects clarity and success measures
  • A technical anchor who owns coherence, integration risk, and tradeoffs
  • A quality anchor who owns the verification strategy and release confidence
  • A flow anchor or delivery manager who owns WIP discipline, bottleneck visibility, and learning loops
  • An operability anchor who owns SLOs, observability, and incident readiness

In a two-pizza team, these might be separate people. In a one-pizza team, one person may cover multiple anchors, with AI agents often taking on more of the drafting, research, and execution within each area. But the anchors still need named ownership. Otherwise, the responsibilities become “everyone’s job,” which quickly turns into “no one’s job.”

This is also where the “builder” identity can go right or wrong. “Builder” can mean end-to-end ownership and tighter loops. Or it can become a euphemism for “we removed roles and hoped the work disappeared.”

With AI, the work doesn’t disappear. It redistributes.

Scrum and Kanban are not obsolete (they are context tools)

A lot of “Agile is dead” takes quietly translate to: “timeboxes feel slow, therefore Scrum is obsolete.” That’s not how mature teams think about frameworks. Mature teams choose mechanisms based on context.

Scrum is useful when a team needs a forcing function for planning cadence, stakeholder inspection points, and a regular rhythm of alignment, especially while decision rights and trust are still maturing.

Flow-based systems become more attractive when deployments are continuous, work items are consistently small, WIP limits are respected, and dependencies are visible and actively managed.

AI nudges many teams toward flow because small batch size and fast verification become even more powerful. But “nudges” is not “replaces.” What AI really kills is ceremony without outcomes.

VSM matters more as AI adoption rises

Here’s the test I keep coming back to: If AI speeds up coding and your end-to-end lead time stays the same, your constraint is not engineering output. Your constraint is the value stream.

That’s why Value Stream Management and product operating models matter more, not less, in an AI-shaped world. You need visibility into where work actually waits. You need clarity on decision rights. You need an operating system that can absorb higher delivery capacity without increasing rework and production risk.

AI is an accelerator. The product operating model is the steering and the guardrails. If steering and guardrails are weak, AI doesn’t create agility; it creates faster confusion.

That’s why, in an AI adoption wave, VSM and the product operating model become non-negotiable: they convert raw delivery capacity into aligned outcomes through visibility, ownership, decision rights, and investment boundaries.

AI can starve your teams upstream if discovery and prioritization can’t keep up.

AI can jam your teams downstream if validation and operational readiness can’t keep up.

When the loops get out of sync, speed doesn’t feel like acceleration. It feels like chaos.

What to keep, what to drop, what to add

If you want a more useful conversation than “after Agile,” try “after Agile theater.” Keep what preserves learning and reduces risk. Drop what exists to create the appearance of control.

Add what makes teams AI agent-ready without surrendering judgment: engineered clarity, continuous verification loops, guardrails by design, and flow metrics that expose constraints across the full value stream, not just inside engineering.

The claim I’ll keep making in 2026

Agile isn’t dead unless someone can point to a genuinely new delivery model that eliminates the core responsibilities of building software fast, safely, and under uncertainty.

AI changes speed, redistributes responsibilities, and changes who does the work. It does not remove the obligation to make sound decisions, validate rigorously, and operate reliably. Accountability has not changed.

So yes, teams will shrink in some contexts. Roles will rebundle. Titles will evolve. “Builders” will become a common identity. AI agents will take on more implementation, research, and drafting. But the delivery foundation remains.

And if you’re tempted to post “Agile is dead,” I’ll offer a challenge instead: tell me what will replace the benefit an organization gets from becoming more agile. Or tell me which responsibility set disappeared. Or tell me you’re really talking about theater.

Either way, we’ll have a more honest conversation.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

References

  • Moving away from Agile: What’s Next – Martin Harrysson & Natasha Maniar, McKinsey & Company (YouTube).

Filed Under: Agile, AI, Leadership, Product Delivery, Software Engineering, Value Stream Management

AI Fluent, Fundamentally Lost

December 7, 2025 by philc

The Dual Bar for Hiring in 2026

4 min read

Last week, Gene Kim and Steve Yegge published a piece on vibe coding titled Hiring in the Age of AI: What to Interview For.1 Their central question is one every engineering leader must confront: if AI has reshaped how software is built, how should we evaluate talent today?

They argue that modern interviews must identify candidates who have embraced AI, engineers who can prompt, manage context, and direct tools toward outcomes. I agree. But this view overlaps with a concerning pattern I described in my recent article, When AI Isn’t Enough.2

We are at a crossroads where two truths coexist: AI fluency is no longer optional, but it is not enough to make someone an engineer.

The “AI Crutch” Phenomenon

In recent software engineering interviews, I’ve noticed a recurring pattern. Candidates breeze through screens using AI assistants, producing clean, working code. But the moment the conversation shifts to fundamentals, they collapse.

In one instance, a candidate couldn’t explain why they chose composition over inheritance in the code they had just generated. The code was solid, but the engineer lacked a mental model of why it worked or what would break if the requirements changed.

This was a lack of foundation. AI had become a crutch, allowing them to produce strong output while masking a hollow understanding of the system.

The Great Divergence: Acceleration vs. Noise

A pattern is emerging across the industry. Software engineering is splitting into two groups, and the results are counterintuitive.

Group 1: The Architects. Senior engineers (and those with strong instincts) are achieving massive productivity gains. They can guide AI, spot hallucinations, and explain clean architecture to the tool. For them, AI is an accelerator.

Group 2: The Prompters. Engineers without fundamentals are actually getting slower. They cannot evaluate the AI’s suggestions. When the model drifts, they lack the intuition to course-correct, turning the tool into noise rather than augmentation.

This second group creates a hidden enterprise risk: The Glass Cannon.

They build systems that look impressive and powerful but shatter under the pressure of real-world constraints. The risks are invisible at first, but devastating over time:

  • The Black Box Problem: Because they cannot explain their own output, they treat their code as a third-party library. When it breaks, recovery time skyrockets.
  • Debt at Machine Speed: They may ship features, but they generate technical debt at an accelerated rate. They cannot optimize for cloud costs, architecture, performance, or resilience, or spot silent security vulnerabilities, because they assume “working” means “correct.”
  • Team Burden: They shift significant pressure onto teammates and senior engineers, who must catch flawed designs, brittle patterns, and AI-driven errors during code reviews.

This shifts the cost of software development from creation (which becomes cheap) to maintenance (which becomes prohibitively expensive).

The Dual Bar for Modern Talent

Effective hiring in 2026 requires us to stop picking one lens over the other. We must test for The Dual Bar:

  1. Can the candidate reason through a problem without the aid of AI? (To ensure they aren’t building glass cannons.)
  2. Can they intentionally use AI to accelerate their work? (To ensure they remain competitive.)

We aren’t hiring for what AI might be able to do in 2030. We are hiring for what teams need to ship and maintain now. That requires a new hiring rubric.

A New Hiring Model

To surface the engineers who can think, not just the ones who can prompt, consider structuring your interview process around these five signals:

  • Fundamentals: Test this with at least one session where AI tools are off the table. Focus on fundamentals, design reasoning, and trade-offs, not syntax recall.
  • AI Fluency: Ask them to walk through a recent AI-assisted project. How did they prompt? How did they debug model mistakes? Or have them work through a challenge in real time using AI on a shared screen.
  • Communication: In an AI world, muddled explanations lead to muddled prompts. Can they articulate technical context with precision?
  • Systems Thinking: Present a scenario with competing trade-offs (e.g., latency vs. consistency). See if they can connect decisions to the broader architecture.
  • Curiosity: Ask what they’ve experimented with in the last 90 days. Engineers thriving in this era are climbing the learning curve with intention.

Acceleration vs. Illusion

There is a fine line between acceleration and illusion. If we hire based on the wrong signals, we risk building teams with strong output but weak understanding.

The current generation of great engineers will be those who use AI as a collaborator, not a substitute for thinking. They will use these tools to amplify their strengths rather than hide their gaps.

The question every leader should ask now: Does our interview process surface the engineers who can think, or just the ones who can prompt?

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Kim, G., & Yegge, S. (2025, December 1). Hiring in the Age of AI: What to Interview For. IT Revolution. https://itrevolution.com/articles/hiring-in-the-age-of-ai-what-to-interview-for/
  2. Phil Clark. (2025, November 29). When AI Isn’t Enough. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/

Filed Under: AI, Engineering, Leadership, Software Engineering

When AI Isn’t Enough

November 29, 2025 by philc

Why Fundamentals Still Matter in an AI-Shaped Engineering World

6 min read

In the past year, I’ve noticed a shift in how engineering candidates present themselves. A senior director on my team recently began interviewing for a critical backfill.

On paper, the candidates were strong. In the early rounds, several performed exceptionally well, with clean solutions, fast iterations, and confident code. But once the conversation moved from what they could produce with AI to what they understood without it, everything changed.

The same candidates who looked senior-level on a coding screen suddenly struggled with composition, inheritance, architectural tradeoffs, or the purpose behind common design patterns. They weren’t nervous. They didn’t know.

And that’s when a deeper leadership question emerged, one that every software engineering leader I’ve spoken with over the past year is now wrestling with:

What does it mean to be a software engineer when AI can write much of the software?

The Illusion of Mastery

We’ve been pushing AI adoption in our organization since early 2023. Not because it was trendy, but because it was obvious where the future was heading. Over the summer, we doubled down on AI literacy, aiming to have every engineer use these tools comfortably and confidently by year’s end.

The early days were rocky. Engineers said the tools slowed them down. The suggestions lacked context. Resetting instructions became a ritual. Reviews took longer, not shorter, because the generated code wasn’t always correct; it only looked correct. That friction turned out to be a necessary phase.

Once engineers learned how to provide context, prompt effectively, and evaluate output, their productivity didn’t just improve; it multiplied. AI amplifies skill; it does not create it. And that dynamic is now playing out across many hiring pipelines.

Do Fundamentals Still Matter?

A school of thought is gaining momentum in the industry. I’ve heard it from candidates, managers, and even a few senior leaders:

“If you can ask AI the right questions, do you really need to understand the underlying concepts?”

It’s a tempting idea. AI can explain patterns. It can suggest architecture. It can generate code that appears correct and often is.

For specific kinds of work, such as rapid prototyping, experimentation, and early-stage product exploration, it may be enough. But anyone who has owned an enterprise system knows the distinction: a proof of concept is not a production system.

In the world of prototypes, speed wins; in the world of enterprise platforms, correctness, reliability, durability, and performance win. The gap between the two is everything.

The New Hiring Reality: AI Is Distorting the Signal

AI has blurred the lines between junior and senior skill, at least at first glance.

Depending on your interview workflow, AI-assisted candidates often perform exceptionally well in early rounds. The solutions come fast. The code reads cleanly. The abstractions look polished. If you’re not paying attention, it’s easy to mistake output for understanding.

But when the conversation shifts to architecture, reasoning, debugging, or explaining why something works, the floor sometimes drops out.

This is not a candidate problem so much as an ecosystem problem. Our traditional hiring processes were not designed for a world where AI can mask gaps in foundational knowledge.

One candidate our director interviewed solved coding problems flawlessly with AI assistance, but could not explain the difference between inheritance and composition. He had mastered the tool, not the craft.

And that raises another concern, one that many CTOs and senior technology leaders now whisper privately: AI is enabling people to appear more capable than they actually are.

AI-Enabled Deception

We’re beginning to see cases where individuals use AI not just to enhance competence, but to manufacture the appearance of it.

Some candidates have used AI to pass interviews, screening rounds, and background checks, only to contribute little or no meaningful work once hired. I know of firsthand examples where someone worked just long enough to collect paychecks before disappearing.

The reality is that, in a screen-shared interview, candidates can quietly lean on second-monitor tools or even AI “whispers.” Everything looks legitimate, yet the candidate may be receiving real-time assistance you cannot detect. Our previous trust assumptions in technical interviews no longer reflect the capabilities of modern tools.

This Is Where Fundamentals Matter Again

Fundamentals matter, not out of nostalgia, but because high-performing systems demand them. Enterprise systems break in ways that require:

  • context
  • judgment
  • intuition
  • analytical reasoning
  • pattern literacy
  • understanding of failure domains
  • the ability to debug what AI got wrong

AI will increasingly diagnose issues before humans get involved. But evaluating whether the fix is correct still requires someone who understands the system beneath the abstraction.

Without fundamentals, engineers become dependent on AI. With fundamentals, engineers become exponentially more effective. That distinction is not negotiable.

Accountability Hasn’t Changed

A subtle misconception is emerging: if AI generated the code, responsibility shifts. It does not. Teams remain fully accountable for every line they push to production, AI-assisted or not. And at least for now and the near future, nothing about AI’s current capabilities changes that.

AI does not dilute ownership. AI does not absorb blame. AI does not change the duty of care.

If an engineer cannot explain the code they are committing, they are not ready to commit it. And if a team cannot reason about how a change behaves under load, in failure, or across distributed components, the team is not ready to own that system.

This isn’t theoretical. AI-generated code is already introducing subtle regressions, brittle logic, and incorrect assumptions. When teams ship code they don’t fully understand, failures become harder to diagnose and recover from.

Ambiguity around ownership is the fastest way to erode reliability.

Fundamentals preserve accountability. They allow engineers to validate, challenge, and harden AI-generated output with the same rigor expected of human-written code. Most importantly, they prevent teams from outsourcing judgment, the one responsibility no tool can assume.

In the current AI era, fundamentals serve as guardrails that keep systems reliable and teams accountable.

Rethinking What We Evaluate

If we expect engineers to use AI, and we should, then interviews must evolve to focus on what AI cannot conceal. These include architectural reasoning, debugging skills, the ability to assess and challenge AI-generated output, design intuition, system-level thinking, and the ability to explain decisions before writing code.

Engineers still need a strong command of foundational concepts that AI frequently mishandles. They must understand how data structures and algorithms affect performance and scalability, and how memory and state behave in real production environments. They should know core software design principles such as encapsulation, composition, immutability, and functional patterns, which guide how systems are structured and maintained.

They also benefit from fluency in common design patterns and the judgment to apply them responsibly. They need a clear grasp of APIs, contracts, and system boundaries, as well as how architectural choices play out in distributed, event-driven, and microservice-based environments. They must be able to reason about concurrency, consistency models, failure scenarios, and performance bottlenecks, areas where AI-generated code frequently introduces subtle bugs.

Finally, they require strong testing, debugging, and diagnostic skills. Engineers must be able to interpret logs, metrics, traces, and behavioral patterns to understand what software is actually doing rather than relying solely on what an AI claims it should do.

For now, these skills are what set high-performing, AI-capable engineers apart.

The Bottom Line

AI is transforming software development at a pace we haven’t seen since the shift from on-prem systems to the cloud. But speed introduces its own risks. Leaders must now answer a question that will define the next decade of engineering:

Do we want teams that generate code with AI, or teams that understand, validate, and elevate what AI produces?

Because in proofs of concept, AI might be enough. In enterprise systems, where durability, reliability, and trust matter, misunderstanding comes at a cost. AI is an extraordinary amplifier. Fundamentals remain the stabilizer.

Engineering organizations that insist on both will build the most resilient and competitive systems in the years ahead.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: AI, Engineering, Leadership, Software Engineering

When the System Fits, the Product Operating Model Works

November 27, 2025 by philc

9 min read

In every conversation about product delivery, team structures, and operating models, one pattern always stands out: there is no single correct structure for a modern software organization.

Leaders make decisions based on their architecture, constraints, history, and the goals they want to achieve. That is why we see so much variation across companies. Some organizations thrive with smaller, long-lived, self-managed cross-functional teams aligned to clear domains. Others depend on larger engineering manager-led groups, shared capability teams, or more centralized arrangements. These differences are not failures. They are the result of leaders shaping systems around their specific context.

My own experience has shown the strength of a particular combination: small, autonomous, cross-functional, long-lived product teams operating within a clear boundary, supported by Team Topologies thinking, Agile practices, DevOps and continuous delivery, Value Stream Management, and the Product Operating Model.

When these elements align with the architecture and constraints of the environment, they create clarity, flow, and accountability. When they do not, the same practices that thrive in one environment can struggle in another. The operating model only performs when the system beneath it supports it.

That is why I appreciated Thorsten Speil’s recent LinkedIn article on the Product Operating Model. He captured many of its strengths and also surfaced the areas where interpretation varies, including team size, organizational implications, discovery practices, and the broader operational impact of shifting to a product-oriented way of working. His post brought these nuances back into focus and highlighted how easily good ideas get misunderstood once they spread across different companies and contexts.

Two themes resurfaced during the discussion. They do not reflect issues with Thorsten’s article, but they are common points of confusion across the industry and worth exploring more deeply.

Misunderstanding 1: Marty Cagan is recommending larger teams

This belief usually comes from surface-level summaries rather than the substance of the work. In his book Transformed, Marty Cagan does not argue that big teams are inherently better. He is arguing against dividing teams into narrow technical slices that leave them unable to deliver value without coordinating across several other groups.

When a team owns only a small fragment of the flow, such as just the UI or database layer, its success depends on the progress of others. Ownership becomes diluted, and dependencies increase.

The real question is not whether a team is “small” or “large.” It is whether the team owns a complete slice of value: a domain or subdomain, or a coherent value stream, that it can deliver with minimal coordination.

In the organizations I’ve worked with, when we refactored monolithic or tangled systems and clarified domain boundaries, teams often became smaller, not larger, but crucially, they became whole and autonomous. What changed was their completeness, not just headcount.

What really determines the right team design is context, the architecture, domain boundaries, cognitive load, subject-matter expertise requirements, and the way work and value flow across the system.

If a subdomain or product in a portfolio is large enough and demands sustained work, a dedicated team may make sense. If several small subdomains or products share architecture or customer value, a single team or squad covering them together can reduce overhead. Team size and structure should align with system boundaries and value streams, not arbitrary org chart conventions.

Misunderstanding 2: The Product Operating Model replaces DevOps

These two ideas are sometimes mentioned together, but they address different layers of the organization.

DevOps improves the path from code to production. It strengthens feedback loops, automation, stability, and the ability to release safely and frequently. The Product Operating Model influences how decisions are made, how work is funded, how discovery and delivery are structured, and how teams are aligned to outcomes. It governs how strategy flows into teams.

One is about delivery performance. The other is about organizational direction. They are not interchangeable, and in a healthy system, they support each other. DevOps allows teams to learn quickly and respond rapidly. The Product Operating Model ensures that this capability is being applied to the right opportunities.

When organizations confuse the two, they end up with teams that can ship quickly but have no clarity on why, or teams that are empowered in theory but constrained by an outdated delivery path.

Where Value Stream Management fits

One of the most overlooked parts of the conversation is the role of Value Stream Management. Many organizations adopt the Product Operating Model with the right intentions, but without visibility into how work actually flows today. Value Stream Management provides that visibility. It shows where work gets stuck, where dependencies cluster, where priorities conflict, and where delays originate. It is the mechanism that connects architecture, team boundaries, and the customer journey into a single picture.

Without this visibility, a product-aligned structure becomes guesswork. Leaders cannot see the real bottlenecks, and teams cannot understand why autonomy feels out of reach. Flow metrics reinforce this visibility by making delays, load, efficiency, and distribution measurable. When VSM, flow metrics, and POM reinforce each other, teams gain stability and clarity. Ownership becomes real rather than symbolic.

The Product Operating Model also changes how work is funded

Another important idea that often gets overlooked is the shift in funding. The Product Operating Model is not simply a structural or cultural change; it changes how work is supported economically.

Instead of funding projects on an annual cycle, organizations fund products and the teams responsible for them. Teams are long-lived rather than assembled and disbanded. Prioritization is continuous rather than fixed once a year.

Outcomes replace scope as the primary measure of progress, and domain expertise becomes a long-term asset. Stable teams and stable funding reinforce each other and create an environment where real ownership and long-term accountability can thrive.

Architecture enables team autonomy

It is common to talk about rapid delivery, continuous discovery, and empowered teams, but none of these is possible unless the architecture supports them.

If components are tightly coupled, if deployments require several approvals, or if core systems or data are shared among many teams, autonomy becomes difficult to implement regardless of intention. Organizational charts cannot compensate for technical constraints.

The most effective team topologies emerge from systems with clear domain boundaries, separation of concerns, modularity, and platform capabilities that reduce cognitive load. When architecture and team design reinforce each other, teams can own outcomes. When they conflict, coordination overhead grows, and autonomy becomes harder to achieve.

Architecture choices shape, but do not dictate, the model

I often advocate for distributed systems and microservices because they reduce dependency load and allow teams to operate with greater independence. But that does not mean these architectures are right for every organization. Modular monoliths, macroservices, domain-oriented monoliths, and hybrid models can all support effective product teams when their boundaries are clear and consistent.

What matters most is that the architecture supports meaningful ownership. I have seen monolithic systems with strong modular structure outperform poorly partitioned microservices because the boundaries were more deliberate.

The Product Operating Model does not require microservices. It requires coherent ownership aligned with the architectural reality.

A monolithic system can still operate effectively under a Product Operating Model when teams have clear ownership boundaries. The fundamental idea behind the Product Operating Model is organizing around outcomes and customer value rather than technical layers.

Teams need responsibility for a meaningful, end-to-end part of the product, not just a narrow slice of the stack. When a monolith is structured with deliberate domain separation and disciplined layers, teams can still take ownership of specific product areas or value streams and make decisions within those boundaries.

At the same time, monolithic systems often introduce more coordination requirements. Shared code paths, tightly coupled components, and synchronized releases can create friction and increase dependency load. These challenges do not prevent the Product Operating Model from working, but they require more intentional communication, clearer boundaries, and stronger agreements around how teams collaborate inside the monolith.

The architecture does not have to be perfect; it simply needs to support coherent ownership. The clearer the system’s internal structure, the easier it is for teams to operate end to end without excessive coordination.

This is why context matters. The Product Operating Model succeeds when the system enables teams to own outcomes, regardless of whether the underlying architecture is a monolith, a modular monolith, or a distributed set of services.

Why context matters

Organizations often begin by asking whether they should adopt the Product Operating Model. A better question is what their current system allows and where the real constraints are.

You can adopt a Product Operating Model in a monolithic architecture, and many companies do. What matters most is whether teams can own meaningful areas of the product, make decisions with limited friction, and deliver improvements without excessive dependencies. Some monoliths support this quite well, particularly when structured with clear domain boundaries. Others are so tightly coupled that autonomy is difficult until parts of the system are modernized.

The model itself is rarely the constraint. The system and its boundaries are. Most failed transformations happen not because the Product Operating Model is flawed, but because leaders apply it without understanding the environment that must support it.

The real work is creating the conditions for POM to succeed

Organizations that succeed with the Product Operating Model share several characteristics. Their architecture supports autonomy. Their value streams are visible. Flow metrics guide decisions. Team structures match real domain boundaries. DevOps practices are mature enough to support rapid learning and delivery. And product, design, and engineering operate together as one system.

In these environments, the Product Operating Model does not feel like a framework. It is the natural way the organization should operate. It aligns people, technology, and strategy into a coherent system and gives teams the conditions they need to take real ownership.

What Really Determines Whether POM Succeeds

Most debate about the Product Operating Model focuses on whether it is the right model. That is not the most helpful place to begin. The more important question is whether the system can support long-term product ownership and sustained team autonomy.

The Product Operating Model is not only a team structure. It is a commitment to funding products rather than projects, supporting teams for the lifespan of the product, building and retaining domain expertise, prioritizing work continuously instead of annually, and evaluating progress through outcomes rather than activity. When these elements are combined with modern architecture, visibility into flow, and strong DevOps practices, the Product Operating Model becomes a practical and natural way to operate. Teams can own their work end-to-end and connect what they build to real customer value.

When organizations attempt to adopt the model without making these underlying adjustments, POM struggles. Team boundaries feel artificial, ownership breaks down, and delivery becomes a ceremony rather than a learning experience.

The more productive question is not whether to adopt the Product Operating Model, but rather how to do so. The practical question is what needs to change in the architecture, the flow of work, the funding model, and the team design so that a product-oriented way of working can thrive in this environment.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References and Further Reading

This article draws on ideas and practices that have shaped modern product development, organizational design, and software delivery. For readers who want to explore the concepts more deeply, the following works provide useful context.

Thorsten Speil – “You need to move to the Product Operating Model! … Really?” (2025), https://www.linkedin.com/pulse/you-need-move-product-operating-model-really-whats-thorsten-speil-2mhcf/
The original post that inspired this article and sparked a thoughtful discussion on how organizations interpret and apply POM principles in different contexts.

Marty Cagan – Transformed (2024)
Clear articulation of the Product Operating Model and the organizational conditions needed to support empowered product teams.

Matthew Skelton and Manuel Pais – Team Topologies
Guidance on service-aligned team structures, interaction modes, cognitive load, and organizational boundaries that support flow.

Value Stream Management Consortium – Project to Product Reports (2023–2024)
Industry research on flow metrics, product funding, and how organizations connect technology investments to actual business outcomes.

Dr. Nicole Forsgren, Jez Humble, and Gene Kim – Accelerate
Evidence-based insights into DevOps, continuous delivery, feedback loops, and the capabilities of high-performing engineering organizations.

Steve Pereira and Andrew Davis – Flow Engineering
Practical mapping techniques for visualizing system constraints, dependencies, and opportunities to improve value flow.

Eric Evans – Domain-Driven Design
Architectural foundations for creating clear domain boundaries that support coherent ownership in product-aligned teams.

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

Why Value Stream Management and the Product Operating Model Matter (and What Comes Next)

November 5, 2025 by philc

6 min read

I had the opportunity to revisit my January article and refine its key points for a recent Flowtopia.io post.

Seeing the Why Behind the Frameworks

In 2021, as part of our evolving Agile transformation, I introduced Value Stream Management (VSM) and later championed the Product Operating Model (POM). Yet I never clearly articulated why these practices mattered.

Looking back, we had already been moving toward a product-oriented model long before naming it. Cross-functional product teams operated organically but without shared governance. When capacity pressures mounted, priorities blurred and inefficiencies surfaced, showing that alignment and communication of purpose are as essential as the frameworks themselves.

Inside my own organization, alignment lagged. Technology advanced rapidly, and engineers and Agile Leaders embraced flow metrics and value-stream thinking, while the product function remained loosely engaged. Without clear accountability, the message fractured: technology optimized for flow; product managed for capacity. The gap limited our ability to realize the frameworks’ potential.

This imbalance is common. Most organizations face more work than they have capacity for, making prioritization and a focus on outcomes essential. VSM and the Product Operating Model address this directly, aligning teams, optimizing workflows, and ensuring that every hour of capacity contributes to real value.

“Adopting frameworks isn’t enough; leaders must overcommunicate their purpose.”

The Turning Point: When Efficiency Isn’t Enough

Every transformation reaches a moment of truth. You automate more, deploy faster, and report higher output, yet business leaders still ask, “How are our investments being utilized?”

The disconnect isn’t about effort or talent, but about visibility. Most digital organizations struggle to clearly understand how knowledge work flows or how investments in Scrum, Kanban, DevOps, automation, and now AI impact performance. Teams, in turn, can’t see how their daily work ties to customer or business outcomes.

That’s where VSM and POM intersect; they are two complementary frameworks that connect flow, alignment, and outcomes. Both emerged from the same realization: efficiency alone is insufficient. Without linking how value flows to what outcomes it creates, organizations risk optimizing for motion instead of progress. Sustaining expertise and funding across a product’s lifespan, rather than through short-term projects, produces better results.

From Projects to Products

For decades, technology operated as a cost center measured by utilization and velocity. Projects were funded, staffed, delivered, and dissolved. The product model reversed that logic.

By aligning long-lived teams around customer and business outcomes, organizations create real ownership and continuity. Teams become responsible not just for delivery, quality, and security, but for the outcomes they produce.

Economic accountability strengthens this model. In a product-funded operating structure, long-lived teams contribute to sales and growth, but they also influence the margins those products generate. That requires understanding more than top-line revenue. Teams should know their cost of goods sold (COGS): the direct costs, licenses, labor, implementation effort, and other team expenses that determine the actual cost of delivering and supporting the product.

When teams are evaluated on margin contribution rather than throughput or feature count, the dynamic changes. Ownership deepens. The definition of value expands. Financial discipline becomes part of everyday decision-making.
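To make the arithmetic concrete, here is a minimal sketch of margin contribution for a single product team. The structure and figures are hypothetical illustrations, not Parchment data, and real cost allocation is usually messier.

```python
from dataclasses import dataclass

@dataclass
class ProductEconomics:
    """Hypothetical annual figures a long-lived product team might track."""
    revenue: float            # top-line revenue attributed to the product
    licenses: float           # third-party licenses and hosting
    labor: float              # fully loaded team cost
    implementation: float     # onboarding and implementation effort
    other_costs: float = 0.0  # support, tooling, and other direct expenses

    @property
    def cogs(self) -> float:
        """Cost of goods sold: direct costs of delivering and supporting the product."""
        return self.licenses + self.labor + self.implementation + self.other_costs

    @property
    def margin(self) -> float:
        return self.revenue - self.cogs

    @property
    def margin_pct(self) -> float:
        return self.margin / self.revenue if self.revenue else 0.0


# Illustrative numbers only.
team = ProductEconomics(revenue=4_000_000, licenses=250_000,
                        labor=1_800_000, implementation=300_000, other_costs=150_000)
print(f"COGS ${team.cogs:,.0f} | margin ${team.margin:,.0f} ({team.margin_pct:.0%})")
```

Evaluating a team on a measure like margin_pct rather than feature count is what shifts the conversation from output to sustainable value.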

This also creates new complexity. Accountability and funding are no longer as simple as “get the code out.” They become “deliver a product customers will buy, at a margin the business can sustain.” For many organizations, this is far harder than shipping features, especially when teams are short-lived, responsibilities overlap, or cost allocations remain unclear.

But this discipline is one of the most powerful levers for turning the Product Operating Model from a framework built for speed into one built for sustainable value. It does not push teams back into a cost-center posture. Instead, it gives them the visibility to understand how flow, outcomes, and customer success connect directly to profitability.

In our case, context switching dropped. Developers embedded in single domains became accountable for both flow and customer outcomes. Priorities shifted faster, decisions stayed within teams, and purpose became clearer. When people see how their work creates value, metrics stop being abstract; they become insights for improvement, and they start to matter.

Context Is Everything

“There is no one-size-fits-all approach to transformation. The true power of frameworks like VSM and POM lies in their flexibility to serve as blueprints rather than rigid rules.”

Adoption succeeds only when frameworks align with an organization’s structure, culture, and leadership context. Models fail not by design but by misapplication. That’s why effective organizations start by seeing their system before changing it.

Value Stream Mapping provides visibility, showing how work moves, where it slows, and how efficiently it reaches customers. Flow Engineering practices, such as Outcome Maps, Current-State Maps, and Dependency Maps, enable leaders to visualize how work, teams, and dependencies interact. These visualizations reveal friction, conflicting priorities, and hidden handoffs that delay the realization of value.

“Visibility creates alignment. Alignment establishes the foundation for improvement.”

The 2024 Project to Product State of the Industry Report confirms that elite organizations don’t just implement frameworks; they adapt them to fit their structure and customer context. That adaptability turns adoption into transformation.

Flow and Realization: The Two Sides of Value

Every delivery system operates in two dimensions:

Flow – how efficiently value moves.

Realization – how effectively that value produces business or customer outcomes.

Most organizations measure one and overlook the other or treat them as separate conversations.

Flow metrics, including Flow Time, Velocity, Efficiency, Distribution, and DORA metrics, reveal system health but not its impact.

Realization metrics, such as retention, revenue contribution, and time-to-market, show outcomes but not efficiency.
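To make the flow side concrete, here is a minimal sketch of how Flow Time and Flow Efficiency are commonly derived from work-item timestamps: elapsed time from start to finish, and the share of that time spent in active work. The data model and numbers are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WorkItem:
    """Hypothetical work item with start/finish timestamps and hours of active work."""
    started: datetime
    finished: datetime
    active_hours: float  # time actively worked, excluding waiting and handoffs

    @property
    def flow_time_hours(self) -> float:
        """Flow Time: elapsed calendar time from start to finish."""
        return (self.finished - self.started).total_seconds() / 3600

    @property
    def flow_efficiency(self) -> float:
        """Flow Efficiency: active work as a share of total flow time."""
        return self.active_hours / self.flow_time_hours if self.flow_time_hours else 0.0


items = [
    WorkItem(datetime(2025, 10, 1), datetime(2025, 10, 8), active_hours=20),
    WorkItem(datetime(2025, 10, 2), datetime(2025, 10, 16), active_hours=32),
]
avg_flow_time = sum(i.flow_time_hours for i in items) / len(items)
avg_efficiency = sum(i.flow_efficiency for i in items) / len(items)
print(f"Avg Flow Time: {avg_flow_time:.0f}h | Avg Flow Efficiency: {avg_efficiency:.0%}")
```

Numbers like these describe the health of the delivery system; they say nothing yet about whether the work mattered, which is where realization metrics come in.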

“Flow transforms effort into movement; realization transforms movement into impact.”

The 2024 Project to Product Report found that fewer than 15% of organizations integrate flow metrics with business outcomes. Yet those that do so outperform their peers on both speed and customer satisfaction.

Measuring Across Layers

Metrics operate across three layers:

• System Layer: Flow & DORA metrics reveal delivery efficiency.

• Team Layer: Developer Experience (DX) and sentiment show team health.

• Business Layer: Realization metrics link work to outcomes.

Connecting these layers turns measurement into meaning and prevents metric theater: reporting what’s easy instead of what matters.
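One lightweight way to connect the layers is to carry system, team, and business measures in a single record per value stream so they are always read together. The sketch below is illustrative only; the field names are my assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class ValueStreamScorecard:
    """One record per value stream, spanning system, team, and business layers."""
    value_stream: str
    # System layer: flow and DORA-style delivery measures
    flow_time_days: float
    deploys_per_week: float
    change_failure_rate: float
    # Team layer: developer experience and sentiment
    dx_score: float              # e.g., a periodic developer-experience survey score
    # Business layer: realization measures
    retention_rate: float
    revenue_contribution: float


# Hypothetical value stream and figures.
card = ValueStreamScorecard("ordering", 6.5, 12, 0.04, 7.8, 0.95, 2_400_000)
print(f"{card.value_stream}: flow {card.flow_time_days}d, "
      f"{card.deploys_per_week} deploys/wk, DX {card.dx_score}, "
      f"retention {card.retention_rate:.0%}")
```

Reviewing all three layers in one view is what keeps any single metric from being gamed or reported in isolation.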

Leadership and Structure: The Missing Link

Even the best frameworks fail without a shift in leadership. Adopting VSM and POM means transitioning from a command-and-control approach to one of clarity, from managing tasks to managing systems.

Delegation and empowerment become strategic levers. Leaders define and communicate outcomes and boundaries; teams own delivery, quality, and learning within them. Guided by data-driven feedback, they experiment and improve.

The best teams treat flow and realization as continuous feedback loops, a living system that evolves with every release.

Governance through transparency replaces micromanagement. Dashboards enable leaders to coach, rather than control, by focusing on flow, bottlenecks, and opportunities. Empowerment becomes shared ownership of outcomes.

A mature value-stream culture recognizes that leadership doesn’t disappear; it evolves. The leader’s job is to design the system where great work happens, not to be the system itself.

What Comes Next: Amplification Through AI

Organizations often ask, “What’s next?”

The answer is amplification, using technology, data, and AI to accelerate insight and learning.

AI doesn’t change your system; it magnifies it. If your processes are slow, AI exposes that faster. If your system is healthy, it enhances visibility, identifies bottlenecks, and predicts where investment yields the highest return.

The future of AI in VSM is about augmenting human judgment, not replacing it. Intelligent automation links flow metrics to outcomes, detects deviations early, and surfaces recommendations that leaders can act on in real time. This evolution expands the leader’s role once again, from observer to orchestrator of improvement.
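As one simple illustration of what detecting deviations early might look like (my own sketch, not a description of any particular tool), a rolling baseline can flag when a value stream’s flow time drifts well outside its recent norm:

```python
from statistics import mean, stdev

def flow_time_deviates(history: list[float], current: float, threshold: float = 2.0) -> bool:
    """Flag when current flow time exceeds the recent baseline by `threshold` standard deviations.

    `history` holds recent flow times (e.g., weekly averages in days); the baseline
    is their mean and spread. Returns True when the current value looks anomalous.
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline, spread = mean(history), stdev(history)
    return current > baseline + threshold * spread


# Illustrative weekly average flow times, in days.
recent_weeks = [5.1, 4.8, 5.6, 5.0, 5.3]
print(flow_time_deviates(recent_weeks, current=9.2))  # True: worth a closer look
```

The point is not the statistics; it is that the signal reaches a leader while there is still time to act on it.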

Bridging Technology and Business Value

My ongoing focus is strengthening the connection between technology execution and business outcomes, a lesson shaped by feedback from an executive 360-degree assessment: “You should focus more on business results as a technology leader.”

That insight was right. We transformed from a monolithic architecture and waterfall process into a world-class Agile, microservices-based organization, yet we hadn’t consistently shown how that transformation delivered measurable business results.

To close that gap, we’re developing tools that make value visible:

• Value Stream Templates to connect work with business objectives.

• Initiative & Epic Definitions emphasizing outcomes and dependencies.

• Team-Level OKRs tied to measurable business priorities (see the sketch after this list).

• Knowledge Hub Updates highlighting outcomes over outputs.
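As an illustration of how these artifacts might hang together, here is a minimal sketch linking an initiative and its dependencies to a team-level OKR; every name and number in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    """A measurable result tied to a business priority."""
    description: str
    target: float
    current: float

@dataclass
class TeamOKR:
    objective: str
    key_results: list[KeyResult] = field(default_factory=list)

@dataclass
class Initiative:
    """Initiative definition emphasizing outcomes and dependencies, not output."""
    name: str
    expected_outcome: str
    depends_on: list[str]
    okr: TeamOKR


initiative = Initiative(
    name="Self-service ordering",
    expected_outcome="Reduce manual order handling",
    depends_on=["identity-service"],
    okr=TeamOKR(
        objective="Cut order turnaround time",
        key_results=[KeyResult("Median turnaround (days)", target=1.0, current=3.5)],
    ),
)
print(f"{initiative.name} -> {initiative.okr.objective}")
```

Even a structure this simple forces the question the templates are meant to surface: what outcome does this work exist to move?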

The 2024 Project to Product Report found that organizations that consistently link delivery, metrics, and business outcomes outperform their peers in terms of agility, profitability, and retention.

“The answers reveal whether your organization is optimizing activity or enabling value.”

The Real Transformation

When combined, VSM and POM unlock a higher level of capability. They teach leaders to see how work flows, how people collaborate, and how outcomes drive real impact.

When you see work as a flow of value rather than a measure of effort, you stop managing activity and start leading outcomes.

That’s the actual transformation, shifting focus from what we deliver to what difference it makes.

“The time to act is now. Let’s lead purposefully, ensuring our teams deliver meaningful, measurable value in 2026 and beyond.”

Transformation is never solitary; shared understanding across our industry is where alignment begins.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. The 2024 Project to Product State of the Industry Report, Planview, https://info.planview.com/project-to-product-state-of-the-industry-_report_vsm_en_reg.html
  2. Why Value Stream Management and the Product Operating Model Matter, Rethink Your Understanding, https://rethinkyourunderstanding.com/2025/01/why-vsm-and-the-product-operating-model-matter/

Filed Under: Agile, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

