

So, What Does a VP of Software Engineering Do?

August 21, 2025 by philc

7 min read

This article began as a response to a colleague in the industry, Patrice Corbard, a DevOps advisor, trainer, and author in France, who asked me a simple but difficult question:

“Can you describe what you consider to be the most important jobs-to-be-done in your role as VP of Engineering, as well as the pains and gains, ranked in order of importance?”

It’s a fair question. If you search online, you’ll find plenty of job descriptions and responsibility lists. What you won’t find is a candid look at the role from the inside: what you’re accountable for, what makes the job rewarding, and what makes it challenging.

My answer to Patrice became the inspiration for this article. What follows isn’t a universal definition of the VP of Engineering role. It’s how the role has applied to me, shaped by my experience, the transformations I’ve led, the leadership cultures I’ve worked within, and the context of our change initiatives, growth, and acquisitions in a rapidly evolving digital industry.

When people ask me what I do as a VP of Software Engineering, they sometimes expect a simple answer: “You lead engineers, right?”

The truth is, the role of VP of Engineering isn’t black-and-white. It depends heavily on:

  • The size and stage of the company
  • The leadership culture you operate within
  • The context of the organization

What I can share is my journey, how the role has evolved for me, what I’ve been held accountable for, and what it’s been like.

My Journey to VP of Engineering

When I joined Parchment more than ten years ago, I brought over a decade of enterprise software engineering, architecture, and leadership experience.

At Parchment, the engineering teams were still operating in waterfall silos. The organization had only just started adopting Agile and Scrum ceremonies. Delivery was slow, fragile, and disconnected from business outcomes.

My earliest accountability as a Director of Engineering was helping engineering transform, working with highly skilled, passionate, open-minded team members to move us toward Agile, Lean, and DevOps practices that allowed us to ship with confidence.

The shift wasn’t simply about adopting new frameworks, but demanded a deeper transformation. It required me, my team, and many leaders around me to learn, unlearn, and relearn. To lead effectively, I had to embody humility and set the standard through my own actions.

Over time, my role expanded as the organization scaled. The few dozen engineers I led as a Director eventually grew to more than 175 people across 10 countries as a VP of Engineering. And with that growth, the scope of my responsibilities shifted.

Four Pillars of Accountability

Looking back, I can summarize my VP responsibilities into four enduring accountabilities:

1. Enterprise-Level Software Quality and Resilience

  • Ensure stability and reliability of delivery
  • Support automation initiatives and shorten lead times
  • Use flow metrics to measure and improve

2. People Engagement

  • Without engaged teams, delivery grinds to a halt
  • Engagement comes from psychological safety, inclusion, autonomy, purpose, and leadership that people trust

3. Retention and Development

  • Attracting great talent is hard; retaining them is harder
  • Build career frameworks, coach managers, and provide growth opportunities
  • Much of my time went into developing engineers who had just stepped into leadership

4. Skills and Capabilities

  • Keep teams competitive in today’s tech landscape
  • Don’t chase every shiny tool, but invest in learning, experimentation, and the right capabilities

Everything else I did, adopting Value Stream Management (VSM), integrating AI copilots, partnering with Product, aligning with Finance, flowed back into these four pillars.

Balancing Global Talent

Another dimension of the VP role is managing the distribution and cost of talent. U.S. hiring alone can’t always scale, so part of my responsibility was building a model that included nearshore, offshore, and local teams.

Sometimes that meant intentionally diversifying where and how we hired. Other times, it meant adapting through acquisitions in new geographies, inheriting engineering teams with their own culture, practices, and expectations.

In both cases, the challenge wasn’t just financial. It was about creating alignment across different regions, time zones, and cultures, while still building one cohesive engineering organization.

Getting this right was critical not only to scaling sustainably but also to retaining talent and protecting delivery resilience as the company grew globally.

Beyond Delivery: Transformation and Business Alignment

The VP role isn’t only about keeping the trains running.

I was deeply involved in:

  • Technical due diligence in acquisitions
  • Aligning metrics with business outcomes
  • Contributing to valuations during funding rounds and ownership changes

It also meant championing long-term transformation strategies:

  • Moving from waterfall to Agile, Lean, DevOps, and Continuous Delivery
  • Adopting Value Stream Management for end-to-end visibility
  • Driving AI literacy and adoption across engineering

AI adoption is about building a culture of learning, experimentation, and practical adoption so teams build real capability.

One truth I learned: engineering only matters if it’s connected to the growth engine of the business. Otherwise, it gets treated as a cost center.

VP of Engineering vs. CTO

I’m often asked: “What’s the difference between a VP of Engineering and a CTO?”

From my experience:

  • A CTO puts technology first, people second. They set the vision, connect strategy to growth, and influence investors.
  • A VP of Engineering puts people and practices first, technology second. My job is to build engaged teams and strong delivery systems so the strategy is executed at scale.

Both roles are essential. One is about what we bet on. The other is about how people and systems deliver it.

Leadership Culture Shapes the Role

Another factor that defined my journey was who I reported to and the leadership culture around me.

  • For most of my tenure, I reported to a leader who gave me autonomy and trusted me. Those years were expansive; we built team autonomy, focused on improving delivery cadence, agility, and flow, and made measurable progress.
  • When that leader retired, a new CTO arrived. He spoke Agile but led with command-and-control habits. It clashed with our progress and felt like a hand grenade in the middle of our transformation.
  • Later, after an acquisition, a VP of Product replaced the CTO and owned both Product and technology. Our philosophies diverged, but where we were aligned, in people and culture, we found common ground.

The lesson: your autonomy and alignment with peers and superiors shape the job.

One of the most underestimated jobs-to-be-done for a VP of Engineering is this: setting and sustaining long-term strategy, digital transformation, Agile, VSM, team outcomes and performance feedback, and building competitive advantage through culture and delivery. But here’s the catch: a change in senior leadership above you can accelerate that strategy, or derail it overnight.

From T-Shaped to V-Shaped Skills

A VP of Engineering can’t stand still.

Early on, I had strong T-shaped skills: depth in engineering and breadth in adjacent areas. But to operate at the executive level, I had to develop what I call V-shaped skills: depth in engineering plus meaningful depth in several other domains.

That meant deliberate, ongoing investment in learning:

  • Scaling organizations: team topologies, value streams, spans of control
  • Strategy and OKRs: translating strategy into objectives and results
  • Funding, M&A: diligence, integration, and how maturity shows up in valuation
  • Thinking like a CEO: runway, margins, growth levers, complex tradeoffs
  • Product management: enough depth to partner with product leaders
  • Finance fluency: COGS, OPEX, ROI, metrics that tie tech to earnings
  • Modern architecture & technology: staying credible without micromanaging
  • Leadership craft: books, workshops, conferences, sharpening coaching and communication

It also meant mentoring beyond engineering. In 2024, I participated in our Women in Leadership program, coaching a developing leader. Supporting leaders outside my org was a way to invest in a broader leadership fabric.

And it wasn’t just about formal learning. My success was shaped by mentors and the network I built both inside and outside my organization. Collaborating with senior executives in other companies helped me benchmark our progress, validate practices, and learn from both successes and failures. That external perspective was invaluable in shaping my decisions and accelerating transformation.

The Hard Parts

It isn’t all bright spots.

Being VP of Engineering also meant being accountable for cost-saving measures and layoffs. Those are the darkest days, balancing empathy with business realities while protecting trust and continuity as best you can.

The Highlights

But there are bright spots, too: the moments that make the hard parts worth it.

  • Contributing to a team driving significant organizational growth.
  • Watching team members progress and grow into leaders, contributors, and mentors themselves.
  • Seeing the organization thrive and succeed because of the engineering team’s partnership with product and business.
  • Having the opportunity to mentor others, both inside and outside engineering, and know you’re investing in the company’s leadership future.
  • Helping to build a culture that makes teams proud to come to work, where people feel connected, trusted, and valued.
  • Having a direct impact on something bigger than you.

These are the outcomes that fuel purpose in the role and make the investment in people and practices pay off.

What I’ve Learned

So what does a VP of Engineering do?

  • Ensure software is reliable and resilient
  • Keep teams engaged and thriving
  • Retain and develop people with clear growth paths
  • Invest in skills and capabilities so teams stay competitive
  • Lead transformation by learning, unlearning, and relearning
  • Align execution with business outcomes
  • Contribute to M&A, funding, and investor communication
  • Drive practices like AI adoption to build long-term capability
  • Navigate leadership cultures, reporting lines, and autonomy
  • Expand from T-shaped to V-shaped skills, supported by mentors and networks
  • Balance global talent through local hiring, nearshore/offshore models, and acquisitions in new geographies

And most of all, accept that the role is never static; it shifts as the company shifts.

Closing Thought

If you’re wondering what a VP of Engineering does, the only honest answer is: it depends.

It depends on the organization, its maturity, and the leadership culture. My story is just one version, shaped by digital transformation, scaling, global talent strategy, AI adoption, mentorship, and peer networks.

What hasn’t changed is this: the job is about building systems of delivery and leadership that last, systems that sustain people, products, and business value long after a single leader has moved on.

And remember: this is just a taste of how the VP of Engineering role has applied to me, in my organizations, and my context.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Engineering, Leadership, Software Engineering

AI in Software Delivery: Targeting the System, Not Just the Code

August 9, 2025 by philc

7 min read

This article is a follow-up to my earlier post, AI Is Improving Software Engineering. But It’s Only One Piece of the System. In that post, I explored how AI is already helping engineering teams work faster and better, but also why those gains can be diminished if the rest of the delivery system lags.

Here, I take a deeper look at that system-wide perspective. Adopting AI is about strengthening the entire system. We need to think about AI not only within specific teams but across the whole organization, ensuring its impact is felt throughout the value stream.

AI has the potential to improve how work flows through every part of our delivery system: product, QA, architecture, platform, and even business functions like sales, marketing, legal, and finance.

If you already have robust delivery metrics, you can pinpoint exactly where AI will have the most impact, focusing the effort on the actual constraints rather than “speeding up” work at random. But for leaders who don’t yet have a clear set of system metrics and are still under pressure to show AI’s return on investment, I strongly recommend starting with a platform or framework that captures system delivery performance.

In my previous articles, I’ve outlined the benefits of SEI (Software Engineering Intelligence) tools, DORA metrics (debatable), and, ideally, Value Stream Management (VSM) platforms. These solutions measure and visualize delivery performance across the system, tracking indicators like cycle time, throughput, quality, and stability. They help you understand your current performance and also enable you to attribute improvements, whether from AI adoption or other changes, to specific areas of your workflow. Selecting the right solution depends on your organizational context, team maturity, and goals, but the key is having a measurement foundation before you try to quantify AI’s impact.
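To make that concrete, here is a minimal sketch (in Python) of the kind of baseline such a platform provides, computed from a few hypothetical work-item records. The field names and numbers are illustrative only; in practice the data comes from your SEI or VSM platform, not hand-built dictionaries.

```python
from datetime import datetime
from statistics import median

# Hypothetical work-item records; real data would come from your SEI or VSM platform.
work_items = [
    {"id": "A-101", "started": datetime(2025, 6, 2), "finished": datetime(2025, 6, 9), "caused_incident": False},
    {"id": "A-102", "started": datetime(2025, 6, 3), "finished": datetime(2025, 6, 20), "caused_incident": True},
    {"id": "A-103", "started": datetime(2025, 6, 10), "finished": datetime(2025, 6, 14), "caused_incident": False},
]

def baseline_metrics(items):
    """Summarize cycle time, throughput, and a simple stability proxy."""
    cycle_times = [(i["finished"] - i["started"]).days for i in items]
    return {
        "median_cycle_time_days": median(cycle_times),
        "throughput_items": len(items),
        "change_failure_rate": sum(i["caused_incident"] for i in items) / len(items),
    }

print(baseline_metrics(work_items))
```

The point is simply that a baseline like this must exist before anyone can credibly claim AI moved it.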

The Current Backlash and Why We Shouldn’t Overreact

Recent research and commentary have sparked a wave of caution around AI in software engineering.

A controlled trial by METR (2025) found that experienced developers using AI tools on their repositories took 19% longer to complete tasks than without AI, despite believing they were 20% faster. The 2024 DORA report found similar patterns: a 25% increase in AI adoption correlated with a 1.5% drop in delivery throughput and a 7.2% decrease in delivery stability. Developers felt more productive, but the system-level metrics told another story.

Articles like AI Promised Efficiency. Instead, It’s Making Us Work Harder (Afterburnout, n.d.) point to increased cognitive load, context switching, and the need for constant oversight of AI-generated work. These findings have fed a narrative that AI “isn’t working” or is causing burnout.

But from my perspective, this moment is less about AI failing and more about a familiar pattern: new technology initially disrupts before it levels up those who learn to use it well. The early data reflects an adoption phase, not the end state.

Our Teams’ Approach

Our organization is embracing an AI-first culture, driven by senior technology leadership and by senior engineers who are leading the charge: innovating, experimenting, and mastering the latest tools and LLMs. However, many teams are earlier in their adoption journey and can feel intimidated by these pioneers. In our division, my focus is on encouraging, training, and supporting engineers to adopt AI tools, gain hands-on experience, explore use cases, and identify gaps. The goal isn’t immediate mastery but building the skills and confidence to use these tools effectively over time.

Only after sustained, intentional use, months down the line, will we have an informed, experienced team that can provide meaningful feedback on the actual outcomes of adoption. That’s when we’ll honestly know where AI is moving the needle, and where it isn’t.

How I Respond When Asked “Is AI Working?”

This approach is inspired by Laura Tacho, CTO at DX, and her recent presentation at LeadDev London, How to Cut Through the Hype and Measure AI’s Real Impact (Tacho, 2025). As a leader, when I face the “how effective is AI?” debate, I ground my answer in three points:

1. How we are performing

We measure our system performance with the same Flow Metrics we used before AI: quality, stability, time-to-value, and other delivery health indicators. We document any AI-related changes to the system, tools, or workflows so we can tie changes in metrics back to their potential causes.
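To illustrate the second half of that point, here is a lightweight sketch of keeping a change log alongside the metrics so a later metric shift can be traced back to candidate causes. The entries, dates, and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeEvent:
    """A system, tool, or workflow change worth correlating with metric shifts."""
    when: date
    description: str
    category: str  # e.g. "ai-tooling", "process", "architecture"

# Hypothetical change log kept alongside the flow metrics.
change_log = [
    ChangeEvent(date(2025, 5, 1), "Enabled AI coding assistant for backend teams", "ai-tooling"),
    ChangeEvent(date(2025, 6, 15), "Added AI-generated test case review step", "ai-tooling"),
]

def changes_in_window(log, start, end):
    """Return the changes that overlap a window where a metric moved."""
    return [c for c in log if start <= c.when <= end]

# If median cycle time shifted between these dates, list the candidate causes.
for event in changes_in_window(change_log, date(2025, 5, 1), date(2025, 6, 30)):
    print(event.when, event.description)
```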

2. How AI is helping (or not helping)

We track where AI is making measurable improvements, where it’s neutral, and where it may be introducing new friction. This is about gaining an honest understanding of where AI is adding value and where it needs refinement.

3. What we will do next

Based on that data and team feedback, we adjust. We expand AI use where it’s working, redesign where it’s struggling, and stay disciplined about aligning AI experiments to actual system constraints.

This framework keeps the conversation grounded in facts, not hype, and shows that our AI adoption strategy is deliberate, measurable, and responsive.

What System Are We Optimizing?

When I refer to “the system,” I mean the structure and process by which ideas flow through our organization, become working software, and deliver measurable value to customers and the business.

Using a Value Stream Management and Product Operating Model approach together gives us that view:

  • Value stream: the whole journey of work from ideation to delivery to customer realization, including requirements, design, build, test, deploy, operate, and measure.
  • Product operating model: persistent, cross-functional teams aligned to products that own outcomes across the lifecycle.

Together, these models reveal not just who is doing the work, but how it flows and where the friction is. That’s where AI belongs, improving flow, clarity, quality, alignment, and feedback across the system.

The Mistake Many Are Making

Too many organizations inject AI into the wrong parts of the system, often where the constraint isn’t. Steve Pereira’s It’s time for AI to meet Flow (Pereira, 2025) captures it well: more AI output can mean more AI-supported rework if you’re upstream or downstream of the actual bottleneck.

This is why I believe AI must be tied to flow improvement:

  1. Make the work visible – Map how work moves, using both our existing metrics and AI to visualize queues, wait states, and handoffs.
  2. Identify what’s slowing it down – Use flow metrics like cycle time, WIP, and throughput to find constraints before applying AI (see the sketch after this list).
  3. Align stakeholders – AI can synthesize input from OKRs, roadmaps, and feedback, so we’re solving the right problems.
  4. Prototype solutions quickly – Targeted, small-scale AI experiments validate whether a constraint can be relieved before scaling.
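Here is the sketch referenced in step 2: a rough way to spot the likely constraint by comparing average time spent in each stage. The stage names and timings are made up; real numbers would come from your board or VSM tooling.

```python
# Hypothetical per-stage timings (in days) for recent work items,
# e.g. exported from a board or VSM tool.
stage_times = {
    "refine": [1, 2, 1, 3],
    "build":  [3, 2, 4, 3],
    "review": [5, 7, 6, 8],   # long queues often hide here
    "test":   [2, 2, 3, 2],
    "deploy": [1, 1, 1, 1],
}

def find_constraint(stages):
    """Return the stage with the highest average time in process."""
    averages = {name: sum(times) / len(times) for name, times in stages.items()}
    bottleneck = max(averages, key=averages.get)
    return bottleneck, averages

bottleneck, averages = find_constraint(stage_times)
print(f"Aim AI experiments at: {bottleneck}", averages)
```

If the constraint turns out to be review queues rather than coding, that is where the first AI experiment should go.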

Role-by-Role AI Adoption Across the Value Stream

AI isn’t just for software engineers; it benefits every role on your cross-functional team. Here are just a few examples of how it can make an impact; each role has many more possibilities than those listed below.

Product Managers / Owners

  • Generate Product Requirements Documentation.
  • Analyze customer, market, and outcome metrics.
  • Groom backlogs; draft user stories and acceptance criteria.
  • Summarize customer feedback and support tickets.
  • Use AI to prepare for refinement and planning.

QA Engineers

  • Generate test cases from acceptance criteria or code diffs.
  • Detect coverage gaps and patterns in flaky tests.
  • Summarize PR changes to focus testing.

Domain Architects

  • Visualize system interactions and generate diagrams.
  • Validate design patterns and translate business rules into architecture.

Platform Teams

  • Generate CI/CD configurations.
  • Enforce architecture and security standards with automation.
  • Identify automation opportunities from delivery metrics.

InfoSec Liaisons

  • Scan commits and pull requests (PRs) for risky changes.
  • Draft compliance evidence from logs and release data.

Don’t Forget the Extended Team

Sales, marketing, legal, and finance all influence the delivery flow. AI can help here, too:

  • Sales: Analyze and generate leads, summarize customer engagements, and highlight trends for PMs.
  • Marketing: Draft launch content from release notes.
  • Legal: Flag risky language, summarize new regulations.
  • Finance: Model ROI of roadmap options, forecast budget impact.

Risk and Resilience

What happens when AI hits limits or becomes unavailable? Inference isn’t free; costs will rise, subsidies will fade, and usage may be capped. Do you have fallback workflows? Do you maintain manual expertise and measure AI’s ROI beyond activity? Another reason to gain experience with these tools now is to improve our efficiency and understand our usage patterns.
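One way to reason about that resilience question is to keep an explicit fallback path in any AI-assisted workflow. The sketch below is purely illustrative: ai_summarize stands in for whatever AI service you rely on, and the manual template represents the human expertise worth preserving.

```python
import random

def ai_summarize(ticket_text: str) -> str:
    """Placeholder for a call to an AI service; may fail or be rate-limited."""
    if random.random() < 0.3:  # simulate an outage or usage cap
        raise RuntimeError("AI service unavailable or quota exceeded")
    return f"AI summary: {ticket_text[:60]}..."

def manual_summary_template(ticket_text: str) -> str:
    """Fallback: a structured template the team fills in by hand."""
    return ("SUMMARY (manual)\n"
            "- Problem: <fill in>\n"
            "- Impact: <fill in>\n"
            f"- Raw notes: {ticket_text[:120]}")

def summarize_with_fallback(ticket_text: str) -> str:
    try:
        return ai_summarize(ticket_text)
    except RuntimeError:
        # Keep delivering even when the AI path is down; track how often this happens.
        return manual_summary_template(ticket_text)

print(summarize_with_fallback("Customer reports intermittent timeouts on the export API..."))
```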

What Now?

We already have the metrics to see how our system performs. The real opportunity is to apply AI purposefully across the full lifecycle, from ideation and design, through development, testing, deployment, and into operations and business alignment. By directing AI toward the right constraints, we eliminate friction, unify our teams around clear metrics, and elevate decision-making at every step. If we take the time to learn the tools now, we’ll be ready to use them where they matter most.

Yes, AI adoption is a learning journey. We’ll stumble, experiment, and iterate, but with intention, measurement, and collaboration, we can turn scattered experiments into a sustained competitive advantage. AI adoption is about transforming or improving the system itself.

AI isn’t failing, it’s maturing. We’re on the rise of the adoption curve. Our challenge and opportunity is to build the muscle and culture to deploy AI across the lifecycle, turning today’s experiments into tomorrow’s engineered advantage.

For anyone still hesitant, know this: AI isn’t going away. Whether it slows us down or speeds us up, we must learn to use it well, or we risk being left behind. Let’s learn. Let’s measure. Let’s apply AI where it’s most relevant and learn to understand its current benefits and limitations. There’s no going back, only forward.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

Afterburnout. (n.d.). AI promised efficiency. Instead, it’s making us work harder. Afterburnout. https://afterburnout.co/p/ai-promised-to-make-us-more-efficient

Clark, P. (2025, July). AI is improving software engineering. But it’s only one piece of the system. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/07/ai-is-improving-software-engineering-but-its-only-one-piece-of-the-system/

METR. (2025, July 10). Measuring the impact of early-2025 AI on experienced open-source developer productivity. METR. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Pereira, S. (2025, August 8). It’s time for AI to meet flow: Flow engineering for AI. Steve Pereira. https://stevep.ca/its-time-for-ai-to-meet-flow/

State of DevOps Research Program. (2024). 2024 DORA report. Google Cloud / DORA.

Tacho, L. (2025, June). How to cut through the hype and measure AI’s real impact. Presentation at LeadDev London.  https://youtu.be/qZv0YOoRLmg?si=aMes-VWyct_DEWz0

Filed Under: Agile, AI, DevOps, Leadership, Metrics, Product Delivery, Software Engineering, Value Stream Management

AI Is Improving Software Engineering. But It’s Only One Piece of the System

July 31, 2025 by philc

5 min read

A follow-up to my last post, Leading Through the AI Hype in R&D, this piece explores how strong AI adoption still needs systems thinking, responsibility, and better leadership focus.

Leaders are moving fast to adopt AI in engineering. The urgency is real, and the pressure is growing. But many are chasing the wrong kind of improvement, or rather, focusing too narrowly.

AI is transforming software engineering, but it addresses only one part of a much larger system. Speeding up code creation doesn’t solve deeper issues like unclear requirements, poor architecture, or slow feedback loops, and in some cases, it can amplify dysfunction when the system itself is flawed.

Engineers remain fully responsible for what they ship, regardless of how the code is written. The real opportunity is to increase team capacity and deliver value faster, not to reduce cost or inflate output metrics.

The bigger risk lies in how senior leaders respond to the hype. When buzzwords instead of measurable outcomes drive expectations, focus shifts to the wrong problems. AI is a powerful tool, but progress requires leadership that stays grounded, focuses on system-wide improvement, and prioritizes accountability over appearances.

A team member recently shared Writing Code Was Never the Bottleneck by Ordep. It cut through the noise. Speeding up code writing doesn’t solve the deeper issues in software delivery. That article echoed what I’ve written and experienced myself. AI helps, but, at least for now, not where many think it does.

This post builds on my earlier post, Leading Through the AI Hype in R&D. That post challenged hype-driven expectations. This one continues the conversation by focusing on responsibility, measurement, and real system outcomes.

Code Implementation Is Rarely the Bottleneck

Tools like Copilot, Claude Code, Cursor, Devin, and others can help developers write code faster. But that’s not where most time is lost.

Delays come from vague requirements, missing context, architecture problems, slow reviews, and late feedback. Speeding up code generation in that environment doesn’t accelerate delivery. It accelerates dysfunction.

I Use AI in My Work

I’ve used agentic AI and tools to implement code, write services, and improve documentation. It’s productive. But it takes consistent reviews. I’ve paused, edited, and rewritten plenty of AI-generated output.

That’s why I support adoption. I created a tutorial to help engineers in my division learn to use AI effectively. It saves time. It adds value. But it’s not automatic. You still need structure, process, and alignment.

Engineers Must Own Impact, Not Just Output

Using AI doesn’t remove responsibility. Engineers are still accountable for what their code does once it runs.

They must monitor quality, performance, cost, and user impact. AI can generate a function. But if that function causes a spike in memory usage or breaks under scale, someone has to own that.

I covered this in Responsible Engineering: Beyond the Code – Owning the Impact. AI makes output faster. That makes responsibility more critical, not less. Code volume isn’t the goal. Ownership is.

Code Is One Step in a Larger System

Software delivery spans more than development. It includes discovery, planning, testing, release, and support. AI helps one step. But problems often live elsewhere.

If your system is broken before and after the code is written, AI won’t help. You need to fix flow, clarify ownership, and reduce friction across the whole value stream.

Small Teams Increase Risk Without System Support

Some leaders believe AI allows smaller teams to do more. That’s only true if the system around them improves too.

Smaller teams carry more scope. Cognitive load increases. Knowledge becomes harder to spread. Burnout rises.

Support pressure also grows. The same few experts get pulled into production issues. AI doesn’t take the call. It doesn’t debug or triage. That load falls on people already stretched thin.

When someone leaves, the risk is bigger. The team becomes fragile. Response times are slow. Delivery slips.

The Hard Part Is Not Writing the Code

One of my engineers said it well. Writing code is the easy part. The hard part is designing systems, maintaining quality, onboarding new people, and supporting the product in production.

AI helps with speed. It doesn’t build understanding.

AI Is a Tool. Not a Strategy

I support using AI. I’ve adopted it in my work and encourage others to do the same. But AI is a tool. It’s not a replacement for thinking.

Use it to reduce toil. Use it to improve iteration speed. But don’t treat it as a strategy. Don’t expect it to replace engineering judgment or improve systems on its own.

Some leaders see AI as a path to reduce headcount. That’s short-sighted. AI can increase team capacity. It can help deliver more features, faster. That can drive growth, expand market share, and increase revenue. The opportunity is to create more value, not simply lower cost.

The Metrics You Show Matter

Senior leaders face pressure to show results. Investors want proof that AI investments deliver value. That’s fair.

The mistake is reaching for the wrong metrics. Commit volume, pull requests, and code completions are easy to inflate with AI. They don’t reflect real outcomes.

This is where hype causes harm. Leaders start chasing numbers that match the story instead of measuring what matters. That weakens trust and obscures the impact.

If AI is helping, you’ll see better flow. Fewer delays. Faster recovery. More predictable outcomes. If you’re not measuring those things, you’re missing the point.

AI Is No Longer Optional

AI adoption in software development is no longer a differentiator. It’s the new baseline.

Teams that resist it will fall behind. No investor would approve a team using hammers when nail guns are available. The expectation is clear. Adopt modern tools. Deliver better outcomes. Own the results.

What to Focus On

If you lead AI adoption, focus on the system, not the noise.

  • Improve how work moves across teams
  • Reduce delays between steps
  • Align teams on purpose and context
  • Use AI to support engineers, not replace them
  • Measure success with delivery metrics, not volume metrics
  • Expect engineers to own what they ship, with or without AI

You don’t need more code. You need better outcomes. AI can help, but only if the system is healthy and the people are accountable.

The hype will keep evolving. So will the tools. But your responsibility is clear. Focus on what’s real, what’s working, and what delivers value today.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Clark, Phil. Leading Through the AI Hype in R&D. Rethink Your Understanding. July 2025. Available at: https://rethinkyourunderstanding.com/2025/07/leading-through-the-ai-hype-in-rd
  2. Ordep. Writing Code Was Never the Bottleneck. Available at: https://ordep.dev/posts/writing-code-was-never-the-bottleneck
  3. Clark, Phil. Responsible Engineering: Beyond the Code – Owning the Impact. Rethink Your Understanding. March 2025. Available at: https://rethinkyourunderstanding.com/2025/03/responsible-engineering-beyond-the-code-owning-the-impact

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Metrics, Product Delivery, Software Engineering

Leading Through the AI Hype in R&D

July 27, 2025 by philc

7 min read

Note: AI is evolving rapidly, transforming workflows faster than expected. Most of us can’t predict how quickly or to what level AI will change our teams or workflow. My focus for this post is on the current state, pace of change, and the reality vs hype at the enterprise level. I promote the adoption of AI and encourage every team member to embrace it.

I’ve spent the past few weeks deeply immersed in “vibe coding” and experimenting with agentic AI tools during my nights and weekends, learning how specialized agents can orchestrate like real product teams when given proper context and structure. But in my day job as a senior technology leader, the tone shifts. I’ve found myself in increasingly chaotic meetings with senior leaders, chief technology officers, chief product officers, and engineering VPs, all trying to out-expert each other on the transformative power of AI on product and development (R&D) teams.

The energy often feels like a pitch room, not a boardroom. Someone declares Agile obsolete. Another suggests we can replace six engineers with AI agents. A few toss around claims of “30× productivity.” I listen, sometimes fascinated, often frustrated, at how quickly the conversation jumps to conclusions without asking the right questions. More troubling, many of these executives are under real pressure from investors and ownership to show ROI. If $1M is spent on AI adoption, how do we justify the return? What metrics will we use to report back?

Hearing the Hype (and Feeling the Exhaustion)

One executive confidently declared, “Agile and Lean are dead,” citing the rise of autonomous AI agents that can plan, code, test, and deploy without human guidance. His opinion echoed a recent blog post, Agile Is Dead: Long Live Agentic Development, which criticized Agile rituals like daily stand-ups and sprints as outdated and encouraged teams to let agents take over the workflow¹. Meanwhile, agile coaches argue that bad Agile, not Agile itself, is the real problem, and that AI can strengthen Agile if applied thoughtfully.

The hype escalates when someone shares stories of high-output engineering from one of the senior developers keeping up with AI capabilities: 70 AI-assisted commits in a single night, barely touching the keyboard. Another proposes shrinking an 8-person team to just two engineers, one writing prompts and one overseeing quality, as the AI agents do the rest. These stories are becoming increasingly common, especially as research suggests that AI can dramatically reduce the number of engineers needed for many projects². Elad Gil even claimed most engineering teams could shrink by 5×–10×.

But these same reports caution against drawing premature conclusions. They warn that while AI enables productivity gains, smaller teams risk creating knowledge silos, reduced quality, and overloading the remaining developers². Other sources echo this risk: Software Engineering Intelligence (SEI) tools have flagged increased fragility and reduced clarity in AI-generated code when review practices and documentation are lacking³.

What If We’re Already Measuring the Right Things?

While executives debate whether Agile is dead, I find myself thinking: we already have the tools to measure AI’s impact, we just need to use them.

In my organization’s division, we’ve spent years developing a software delivery metrics strategy centered on Value Stream Management, Flow Metrics, and team sentiment. These metrics already show how work flows through the system, from idea to implementation to value. They include:

  • Flow metrics like distribution, throughput, time, efficiency, and load
  • Quality indicators like change failure rate and security defect rate
  • Sentiment and engagement data from team surveys
  • Outcome-oriented metrics like anticipated outcomes and goal (OKR) alignment

Recently, I aligned our Flow Metrics with the DX Core 4 Framework⁴ matrix, organizing them into four key categories: speed, effectiveness, quality, and impact. We made these visual and accessible, using a simple chart to show how each metric relates to delivery health. These metrics don’t assume Agile is obsolete or that AI is the solution. They track how effectively our teams are delivering value.
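For illustration, the chart boils down to a mapping like the one below. The metric names and groupings are hypothetical and will differ from one organization’s DX Core 4 view to another’s.

```python
# Hypothetical mapping of existing flow and delivery metrics into the
# four DX Core 4 categories; actual metric names and groupings will vary.
core4_view = {
    "speed":         ["flow_time", "flow_velocity"],
    "effectiveness": ["flow_efficiency", "flow_load", "developer_sentiment"],
    "quality":       ["change_failure_rate", "security_defect_rate"],
    "impact":        ["anticipated_outcomes", "okr_alignment"],
}

def metrics_for(category: str) -> list[str]:
    """Look up which delivery metrics roll up into a DX Core 4 category."""
    return core4_view.get(category, [])

print(metrics_for("quality"))
```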

So when senior leaders asked, “How will we measure AI’s impact?” I reminded them, we already are. If AI helps us move faster, we’ll see it in flow time. If it increases capacity, we’ll see it in throughput (flow velocity). If it maintains or improves quality, our defect rates and sentiment scores will reflect that. The same value stream lens that shows us where work gets stuck will also reveal whether AI helps us unstick it.

Building on Existing Metrics: The AI Measurement Framework

Instead of creating an entirely new system, I layered an established AI Measurement Framework on top of our existing performance metrics⁵. The framework includes three categories:

  1. Utilization:
    • % of AI-generated code
    • % of developers using AI tools
    • Frequency of AI-agent use per task
  2. Impact:
    • Changes in flow metrics (faster cycle time)
    • Developer satisfaction or frustration
    • Delivered value per team or engineer
  3. Cost:
    • Time saved vs. licensing and premium token cost
    • Net benefit of AI subscriptions or infrastructure

This approach answers the following questions: Are developers using AI tools? Does that usage make a measurable difference? And does the difference justify the investment?
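As a back-of-the-envelope sketch of those three categories, the calculation might look like the following. Every number is a placeholder; the point is the shape of the calculation, not the figures.

```python
# Hypothetical monthly figures; replace with your own survey, telemetry,
# and finance numbers.
developers_using_ai = 42
total_developers = 60
hours_saved_per_dev_per_month = 6        # from developer surveys / time studies
loaded_cost_per_hour = 95.0              # fully loaded engineering cost
license_cost_per_dev_per_month = 30.0
premium_token_spend_per_month = 1800.0

# Utilization: are developers actually using the tools?
utilization = developers_using_ai / total_developers

# Impact and cost: does the usage make a difference, and does it justify the spend?
value_of_time_saved = developers_using_ai * hours_saved_per_dev_per_month * loaded_cost_per_hour
total_cost = developers_using_ai * license_cost_per_dev_per_month + premium_token_spend_per_month
net_benefit = value_of_time_saved - total_cost

print(f"Utilization: {utilization:.0%}")
print(f"Estimated monthly value of time saved: ${value_of_time_saved:,.0f}")
print(f"Monthly cost: ${total_cost:,.0f}  Net benefit: ${net_benefit:,.0f}")
```

Time-saved estimates from surveys are soft numbers, which is exactly why they should sit beside the flow and quality metrics rather than replace them.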

In a recent leadership meeting, someone asked, “What percentage of our engineers are using AI to check in code?” That’s an adoption metric, not a performance one. Others have asked whether we can measure AI-generated commits per engineer to report to the board. While technically feasible with specific developer tools, this approach risks reinforcing vanity metrics that prioritize motion over value. Without impact and ROI metrics, adoption alone can lead to gaming behavior, and teams might flood the system with low-value tasks to appear “AI productive.” What matters is whether AI is helping us deliver better, faster, and smarter.

I also recommend avoiding vanity metrics, such as lines of code or commits. These often mislead leaders into equating motion with value. Many vendors boast “AI wrote 50% of our code,” but as developer-experience researcher Laura Tacho explains, this usually counts accepted suggestions, not whether the code was modified, deleted, or even deployed.⁵ We must stay focused on outcomes, not outputs.

The Risk of Turning AI into a Headcount Strategy

One of the more concerning trends I’m seeing is the concept of “headcount conversion,” which involves reducing team size and utilizing the savings to fund enterprise AI licenses. If seven people can be replaced by two and an AI license, along with a premium token budget, some executives argue, then AI “pays for itself.” However, this assumes that AI can truly replace human capability and that the work will maintain its quality, context, and business value.

That might be true for narrow, repeatable tasks, or small organizations or startups struggling with costs and revenue. But it’s dangerous to generalize. AI doesn’t hold tribal knowledge, coach junior teammates, or understand long-term trade-offs. It’s not responsible for cultural dynamics, systemic thinking, or ethical decisions.

Instead of shrinking teams, we should consider expanding capacity. AI can help us do more with the same people. Developer productivity research indicates that engineers typically reinvest AI-enabled time savings into refactoring, enhancing test coverage, and implementing cross-team improvements², which compounds over time into stronger, more resilient software.

Slowing Down to Go Fast

Leaving those leadership meetings, I felt a mix of energy and exhaustion. Many people wanted to appear intelligent, but few were asking thoughtful questions. We were racing toward solutions without clarifying what problem we were solving or how we’d measure success.

So here’s my suggestion: Let’s slow down. Let’s agree on how we’ll track the impact of AI investments. Let’s integrate those measurements into systems we already trust. And let’s stop treating AI as a replacement for frameworks that still work; instead, let’s use it as a powerful tool that helps us deliver better, faster, and with more intention.

AI isn’t a framework. It’s an accelerator. And like any accelerator, it’s only valuable if we’re steering in the right direction.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Leschorn, J. (2025, May 29). Agile Is Dead: Long Live Agentic Development. Superwise. https://superwise.ai/blog/agile-is-dead-long-live-agentic-development/
  2. Ameenza, A. (2025, April 15). The New Minimum Viable Team: How AI Is Shrinking Software Development Teams. https://anshadameenza.com/blog/technology/ai-small-teams-software-development-revolution/
  3. Circei, A. (2025, March 13). Measuring AI in Engineering: What Leaders Need to Know About Productivity, Risk and ROI. Waydev. https://waydev.co/ai-in-engineering-productivity-risk-roi/
  4. Saunders, M. (2025, January 6). DX Unveils New Framework for Measuring Developer Productivity. InfoQ. https://www.infoq.com/news/2025/01/dx-core-4-framework/
  5. GetDX. (2025). Measuring AI Code Assistants and Agents. DX Research. https://getdx.com/research/measuring-ai-code-assistants-and-agents/

Filed Under: Agile, AI, Delivering Value, DevOps, Engineering, Leadership, Lean, Metrics, Product Delivery, Software Engineering, Value Stream Management

Bets, Budgets, and Reframing Software Delivery as Continuous Discovery

June 7, 2025 by philc

8 min read

This post is a follow-up to my articles on estimation and product operating models, exploring how adaptive investment, value discovery, and team ownership align with Vasco Duarte’s call for agility beyond the team.

In my earlier posts, “Software Delivery Teams, Deadlines, and the Challenge of Providing Reliable Estimates” and “How Value Stream Management and Product Operating Models Complement Each Other”, I explored two core challenges that continue to hold organizations back: the illusion of predictability in software estimation, and the inefficiency of funding work through rigid project-based models. I argued that software delivery requires a shift toward probabilistic thinking, value stream alignment, and investment in products and initiatives, not fixed-scope, time-bound projects.

Software implementation and delivery estimations have been a constant theme throughout my career. Often seen as a mix of art and science, they remain highly misunderstood. While teams tend to dread them, organizations rely on them for effective planning. Despite their contentious nature, software estimations are an essential part of the process, sparking countless articles, discussions, and debates in the industry. I’m not arguing against estimation or planning. Organizations must plan. Leaders need to make investment decisions, prioritize resource allocation, and create financial forecasts. That doesn’t change. What does need to change is how we think about estimates, how we communicate their confidence, and how we act on the signals that follow.

This is a nuance that can be hard to understand unless you’ve lived both sides, delivering software inside an Agile team and leading business decisions that depend on forecasts. Estimates aren’t the enemy. One lesson I’ve learned, and others often mention, is that the real issue lies in how rigidly we stick to assumptions and how slow we are to adjust them when real-world complexities arise. What we need is to improve both how the business relies on estimates and how delivery teams develop the capability to estimate, update, and communicate confidence levels over time.

A team member recently shared notes from an Agile meetup featuring Vasco Duarte’s talk, “5 Things Destroying Your Ability to Deliver, Even When You’re Agile.” While I didn’t attend the talk, I’ve followed Vasco’s work for years. The talk referenced a 2024 podcast episode of his on Investing in Software¹, which I hadn’t listened to until now. That episode inspired this follow-up article.

In this episode, Vasco highlights an important point: traditional project management, often seen in boardrooms and annual plans, is based on a flawed assumption that we can predict outcomes weeks in advance and expect nothing to change. Software development, much like the weather, is unpredictable and chaotic.

Even today, many people treat software estimates as if they were comparable to predicting timelines for manufacturing physical products or managing predictable projects, such as constructing a house or bridge. They expect precision, often clinging to the initial estimate as an unyielding benchmark and holding teams accountable to it. However, software development is an entirely different realm. It’s invisible, knowledge-driven work filled with unknowns and unpredictability. In complex systems, even a small input change can trigger dramatically different outcomes. We’ve all encountered the “simple request” that unexpectedly spiraled into a significant architectural overhaul. I appreciate how Vasco ties this to Edward Lorenz’s 1961 discovery that small changes in initial conditions can lead to drastically different outcomes in weather models. That idea became the foundation of chaos theory.

Sound familiar?

In software development, we refer to this as “new work with unknowns,” “technical debt,” “rewrite,” or “refactor.” But we rarely treat it with the same respect we give to unknowns in other disciplines. Instead, we often pretend we know what we’re doing, and then demand that others commit to it. That’s the real chaos.

In addition to my focus on probability-based estimations and the Product Operating Model, Vasco’s four-point manifesto supports a shift I’ve long advocated for in team estimates and product leadership. It encourages an approach to software delivery that prioritizes adaptability, relies on real-time feedback, and views investment as an ongoing process rather than a one-time decision. This mindset isn’t about removing unpredictability but about working effectively within it.

1. From Estimates to Bets: Embracing Uncertainty with Confidence

Vasco encourages us to think like investors, not project managers. Investors expect returns, but they also accept risk and uncertainty. They recognize that not every bet pays off, and they adjust their approach accordingly based on the feedback they receive. This mindset aligns closely with how I’ve approached probabilistic estimation.

In knowledge work, “unknown unknowns” aren’t the exception. They’re the norm. You don’t just do the work, you learn what the work is along the way. What appears simple on the surface may uncover deep design flaws, coupling, or misalignment. That’s why I advocate for making estimates that improve over time, where confidence and learning signals are more important than arbitrary story point velocity.

Instead of forcing certainty, we can ask:

“How confident are we right now?”

“What would increase or decrease that confidence?”

“Are we ready to double down, or should we pause and reassess?”

That’s what makes it a bet. And bets are revisited, not rubber-stamped.
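One practical way to express that confidence is a throughput-based Monte Carlo forecast. The sketch below assumes a made-up weekly throughput history and backlog size; the output is a set of confidence levels rather than a single date, which is exactly the kind of signal a bet can be revisited against.

```python
import random

# Hypothetical weekly throughput history (completed items per week).
weekly_throughput = [4, 6, 3, 5, 7, 4, 5, 6, 2, 5]
backlog_size = 40          # items remaining in the bet
simulations = 10_000

def weeks_to_finish(history, remaining):
    """Sample past throughput until the remaining work is 'done'."""
    weeks, done = 0, 0
    while done < remaining:
        done += random.choice(history)
        weeks += 1
    return weeks

results = sorted(weeks_to_finish(weekly_throughput, backlog_size) for _ in range(simulations))

for confidence in (0.5, 0.85, 0.95):
    idx = int(confidence * simulations) - 1
    print(f"{confidence:.0%} confident we finish within {results[idx]} weeks")
```

As new throughput data arrives, the forecast and its confidence levels update, which is what makes the bet revisitable rather than rubber-stamped.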

2. Budgeting for Change, Not Certainty

The second point in Vasco’s manifesto hits close to home: fund software like you invest in the stock market, bit by bit, adjusting as you go. This reinforces what I wrote in my product operating model article: modern organizations must stop budgeting everything up front for the year and assuming the original plan will hold.

Annual planning works for infrastructure, but not for innovation and knowledge work.

In a product-based funding model, teams are funded by their value stream or product, not by project deliverables estimated (or guessed) a year in advance. They receive investment to continuously discover, deliver, and evolve, reassessing value rather than completing a fixed scope against a dated estimate. This model gives you flexibility: invest more in what’s working, cut back where it’s not, and shift direction without resetting your entire operating plan.

3. Experiments Are the New Status Report

Vasco’s third point is deceptively simple: experiment by default. But what he’s talking about is creating adaptive intelligence at the portfolio level, not just team agility.

When we fund work incrementally and view features or epics as bets, we need signals to tell us whether to continue. In our organization, that signal often comes in the form of experiments, lightweight tests, spikes, MVPs, or “feature toggles” that generate fast feedback.

These aren’t just engineering tactics. They’re governance mechanisms.

When teams experiment, they reduce waste, increase alignment, and surface learning early. But more importantly, they feed information back into the portfolio process. A product manager might learn that a new feature doesn’t solve the core problem. A tech lead might identify a performance bottleneck before it becomes a support nightmare. A value stream might kill a half-funded initiative before it eats more cost.

Experiments give you clarity. Gantt charts give you theater.
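As one small example of an experiment wired in as a governance mechanism, the sketch below exposes a capability to a slice of users behind a feature toggle so the team can gather a signal before scaling the bet. The flag name and rollout numbers are hypothetical.

```python
# Hypothetical toggle configuration: roll a new capability out to a slice
# of users and capture a signal before deciding to continue the bet.
toggles = {
    "new-transcript-search": {"enabled": True, "rollout_percent": 20},
}

def is_enabled(flag: str, user_id: int) -> bool:
    """Decide whether a given user sees the experimental capability."""
    cfg = toggles.get(flag, {"enabled": False, "rollout_percent": 0})
    if not cfg["enabled"]:
        return False
    # Stable bucketing so the same user always gets the same experience.
    return (user_id % 100) < cfg["rollout_percent"]

exposed = sum(is_enabled("new-transcript-search", uid) for uid in range(1_000))
print(f"{exposed} of 1000 users see the experiment; compare their outcomes before scaling.")
```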

4. End-to-End Ownership Enables Real Agility

The fourth point in Vasco’s manifesto is about end-to-end ownership, and it resonates deeply with how our teams are structured. When teams own their products from idea to delivery to operation, they don’t just ship; they deliver. They learn, adapt, and inform the next bet.

This kind of ownership isn’t a luxury, it’s a prerequisite to agility at scale.

In our transition to a product operating model, we restructured our delivery teams to align with value streams. We gave them clarity of purpose, full-stack capability, and autonomy to act. But what we hope to get in return isn’t just faster output; it’s better signals.

Teams close to the work produce insights you can trust. Teams trapped in delivery factories or matrixed dependencies can’t.

The Three Ways Still Apply

Listening to Vasco’s manifesto again, I was struck by how strongly it aligns with a set of principles we’ve had since at least 2021: The Three Ways, as described by Gene Kim and coauthors in The DevOps Handbook.

  • The First Way emphasizes flow and systems thinking, focusing on how value moves across the entire stream, not just within teams or silos.
  • The Second Way amplifies feedback loops, not just testing or monitoring, but real signals about whether we’re solving the right problems.
  • The Third Way advocates for a culture of continuous experimentation and learning: accepting uncertainty, embracing risk, and using practice to gain mastery.

These are all still relevant today. But what often goes unspoken is that these principles must extend beyond the delivery teams. They must shape planning, budgeting, prioritization, and governance.

Vasco’s idea of funding software like investments and treating initiatives as “bets” highlights the need to strengthen feedback loops across the portfolio. Experimentation has shifted from simple automated testing to focusing on strategic funding and continuous learning. Similarly, flow isn’t just about deployment pipelines anymore; it’s about speeding up the process from business decisions to tangible, measurable results.

If we’re truly going to embrace agility across the business, we must apply the Three Ways at every level of the system, especially where strategy meets funding and planning.

The Real Work: Planning for Chaos, Leading with Signals

Here’s where I’ll close, echoing Vasco’s message: the fundamental constraint in software isn’t at the team level. It’s at the leadership level, where we cling to project thinking, demand estimates without context, and build plans on the illusion of certainty.

I strongly advocate for incorporating confidence levels and probability estimations in our organization. However, we operate on an annual funding model, planning the entire year’s operating plan, including product development investments, in advance. I hope to eventually work with product-funded budgets instead. Only time will tell. However, we can still evaluate our product development investments as we go and adjust our direction if needed.

To effectively lead a modern software organization, treat funding like an investor, not a contractor. Measure progress based on learning, not hitting milestones. Enable teams to provide actionable insights, not just reports. Structure governance models around value-driven feedback, not activity tracking.

Because you’re no longer managing projects, you’re managing bets in a chaotic system. And the sooner we stop pretending otherwise, the better our outcomes will be.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Duarte, Vasco (Host). “Xmas Special: Investing in Software: Alternatives To Project Management For Software Businesses.” December 27, 2024. Scrum Master Toolbox Podcast: Agile storytelling from the trenches [Audio podcast]. Apple Podcasts, https://podcasts.apple.com/us/podcast/scrum-master-toolbox-podcast-agile-storytelling-from/id963592988

Related Articles

  1. “Software Delivery Teams, Deadlines, and the Challenge of Providing Reliable Estimates”. Phil Clark. rethinkyourunderstanding.com
  2. “How Value Stream Management and Product Operating Models Complement Each Other”. Phil Clark. rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Leadership, Product Delivery, Software Engineering, Value Stream Management

When Team Structure Collides with Role Alignment

May 26, 2025 by philc

How Merging Engineering Models Can Disrupt What Works, and What to Do About It

11 min read

After a recent merger, I was asked to advise an engineering organization that needed to align two very different delivery models.

One part of the organization used small, long-term, cross-functional teams with distributed leadership (self-managed). The other followed a traditional Engineering Manager (EM) model, where one manager handled people, delivery, and agile practices. The company wanted to unify job responsibilities, eliminate performance ambiguity, and ensure fair development opportunities across all teams. The executive leader of the larger organization articulated a clear vision: one company with a single, thoughtfully designed career path built on a foundation of care and respect.

These are worthy goals. I’ve helped lead engineering through nine acquisitions and know firsthand the importance of consistent titles and expectations. But I’ve also learned something else:
“Aligning job titles and responsibilities without fixing team design, architecture, role responsibilities, and delivery structure doesn’t solve the real issues. It just hides them and creates tension and career friction across the division.”

It’s not about being right. It’s about being aligned.

Alignment takes time, planning, and honest conversation.

I’m aligned with the executive leader’s vision: to unify as one company with a shared career path, achieved with care, not urgency. Whether that takes six months or a year and a half, the focus should be on clarity and collaboration, rather than speed.

The real challenge isn’t just structural; it’s cultural. The larger organization’s strong-willed leadership team has never worked within a self-managed team structure. Fixed perspectives can stall progress if we don’t create space to explore why the models differ, not just how they do. We need to identify the root causes of the structural divergence and assess the potential risks to team culture, autonomy, and product alignment, particularly for high-performing, self-managed teams.

The executive leader also emphasized that integration shouldn’t be imposed; it should evolve at the pace of shared understanding. Once we reach that point, we owe it to the teams to communicate with clarity before information leaks and assumptions or uncertainty take hold. The real challenge arose from the other senior leaders within the group. I won’t say which model is better, as it depends on the context. Instead, this article explores the challenges that can occur when we centralize accountability and responsibilities without considering the unique context, and how well-meaning integration efforts can unintentionally disrupt high-performing teams.

Why This Matters: Fairness vs. Fit

After a merger or acquisition, it’s natural and smart for engineering leaders to unify role definitions, career paths, and performance frameworks. Inconsistent job titles and responsibilities across similar roles can create confusion, slow promotions, and introduce bias. If two managers hold the same title but lead very different types of teams, performance expectations become subjective. That’s not fair to them or to the engineers they support.

So, I understood the goals of the integration effort:

  • Establish unified job responsibilities across teams
  • Minimize churn, ensuring no team member feels alienated or unsupported during the transition
  • And maintain high-performing teams that can support product delivery and operational efficiency

The goals weren’t the problem. The real challenge was the implementation.

How can you use a shared career framework when team structures and responsibilities differ?

The differences in team design and responsibilities are where the friction emerged, and where the search for solutions had to begin.

Two Team Models in Contrast

The Engineering Manager Model

In the parent (acquiring) organization’s Engineering Manager (EM)-led structure, a single person is responsible for managing people, overseeing delivery, driving agile practices, and partnering with product. EMs are accountable for both team output and individual performance and development. In many cases, they also serve as the technical lead.

Each EM typically works directly with a team of 6-10 software engineers. The team does not have a Scrum Master or Agile Coach; the EM is responsible for Agile accountability. Similarly, there is no dedicated QA team member, so quality accountability falls on the EM and the software engineers.

This EM model was framed as a version of the “Iron Triad” or “Iron Triangle,” centered on Engineering, Product, and (presumably) UX or Delivery. However, in practice, the Engineering Manager often became the default source of team process, performance, and planning.

This structure isn’t inherently wrong. It works best when:

  • Teams are large and need strong coordination
  • The architecture is monolithic or tightly coupled
  • Product and engineering require direct managerial alignment

However, when scaled broadly or applied without nuance, it can quickly lead to role overload and reliance on individuals rather than systems to drive outcomes.

The Self-Managed Cross-Functional Model

The smaller teams in the acquired organization followed a different model entirely: long-lived, cross-functional teams of 8 to 12 people, including 2-4 software engineers, 1 QA, a product manager/owner, and, in many cases, an agile delivery lead or Scrum Master. In most cases, they had everything they needed to deliver software without coordinating with other teams.

In this structure:

  • Responsibilities are distributed across roles instead of consolidated under a single leader.
  • Engineering Managers exist—but act primarily as career coaches and mentors, not team leads.
  • Agile delivery is facilitated by dedicated Scrum Masters or Agile Leaders embedded in the team.
  • Managers typically oversee 5 to 7 engineers across multiple teams and contribute technically as ICs when appropriate.

These teams naturally align with microservices, subdomains, or product value streams. They work well when the architecture allows for autonomy and the organization invests in clarity, trust, and lightweight governance.

The acquired organization structured its teams to align with clear architectural boundaries, with each team focused on a specific subdomain or service. This approach made the teams both cross-functional and architecturally cohesive, reflecting Conway’s Law by ensuring the team structure matched the design of the software.

Key Difference: Accountability Consolidation

Both models contain the same essential responsibilities: engineering, product collaboration, quality, and delivery. However, in one, accountability is centralized under a manager, while in the other, it is distributed across the team.

The solution isn’t just about structure. It’s about how tightly the team model mirrors the system it’s building.

Conway’s Law tells us that our software systems mirror our organizational communication structures. When the architecture is monolithic or tightly integrated, centralized accountability makes sense. But when the architecture is modular and service-oriented, small, autonomous teams that map directly to system boundaries and subdomains can accelerate delivery and reduce coordination overhead.

And structure doesn’t just affect outcomes, it shapes culture.

In centralized models, decision-making authority and responsibility often rest with the Engineering Manager. This can bring clarity, especially for early-career engineers or less mature teams. But it can also reduce autonomy or create learned dependence, where teams hesitate to act without explicit approval.

In distributed models, autonomy is expected, and with it, psychological safety becomes critical. Teams must feel trusted to make decisions, fail safely, and adjust course without manager intervention. When done well, this fosters ownership and speed. However, without strong role clarity, trust, and support systems, it can lead to confusion or misalignment.

So, while the surface question is, “What does the Engineering Manager own?” the deeper question is, “Does the team structure support the system architecture and the culture you want to build?”

Where It Breaks: Role Titles vs. Role Expectations

On paper, this integration effort was about consistency: standardizing job titles, aligning role definitions, and applying a shared career framework across teams.

In practice, that consistency masked a deeper misalignment: the same title, Engineering Manager, carried very different expectations depending on the model it came from.

In the Engineering Manager-led model:

  • The EM is accountable for people leadership, delivery, agile practice, team velocity, and technical direction.
  • There is no embedded Scrum Master or Agile Coach.
  • The EM is expected to own outcomes, from sprint or iteration health to individual growth to team throughput.

In the self-managed, cross-functional model:

  • The EM is a career manager and mentor, often contributing technically as a senior IC.
  • Agile facilitation is handled by a dedicated team member (e.g., Scrum Master, Agile Leader, Agile Delivery Manager).
  • Delivery ownership and accountability are shared across the team; no single role “owns” performance.

From the outside, both are “Engineering Managers.” But their responsibilities are fundamentally different. When performance reviews, promotion criteria, and development paths are built around the broader EM model, it disadvantages leaders from the self-managed structure or forces the organization to reshape successful teams just to fit the title.

The concern is that unifying role definitions without accounting for structural context can cause real harm.

That harm doesn’t just affect managers. It ripples through teams.

In EM-led models, where one person is accountable for delivery, agile practice, and performance metrics, teams often defer decisions upward, even when they have the skills and context to act. This dynamic can unintentionally train teams to wait for approval, eroding autonomy and making collaboration feel more performative than empowered.

By contrast, long-lived, self-managed teams tend to develop strong psychological safety over time. With clear boundaries and shared ownership, they solve problems together. However, when leadership begins redefining responsibilities around titles instead of how the team works, even these teams can start to hesitate.

Autonomy suffers not because self-managed models lack structure but because outside systems try to reimpose control where clarity already exists.

The friction isn’t theoretical. It appears in performance evaluations, hiring misalignment, and career planning confusion. Eventually, it reaches the team level where roles blur, ownership is second-guessed, and the structure that supported speed and trust begins to unravel.

Legacy Thinking and Structural Blind Spots

One of the biggest challenges in transformations like this isn’t technical. It’s cultural.

I’ve seen firsthand how legacy thinking, even well-meaning thinking, can shape decisions in ways that unintentionally resist growth. During this engagement, I saw it again.

In our initial conversation about team structures, an executive leader of the larger organization made a strategic decision:

“We’re not going to shift 40 teams to the self-managed model. It’s too resource-intensive. The smaller teams will need to align with our Engineering Manager model.”

In a follow-up conversation that I wasn’t part of, a VP from the larger organization said:

“I’ve been using the Engineering Manager model for most of my career. It works.”

These statements weren’t malicious. They were confident, experienced, and full of certainty.

Relying too much on past success can sometimes prevent us from seeing what fits the current situation. What worked earlier in your career or in a different system might not work now. True transformation requires more than confidence. It requires curiosity.

In yet another conversation, I heard secondhand that one of these same leaders, after our first meeting on the topic, asked:

“Has Phil ever been a software engineer?”

That question stuck with me, because it reduced my focus on how software is delivered to a question about my technical credentials. The leader could have looked at my LinkedIn profile or asked for my resume. Instead, the comment revealed a mindset: if someone doesn’t share our experience, maybe their perspective doesn’t count.

These moments aren’t about ego. They’re about reflection, about recognizing how deeply personal experience can cloud structural objectivity. When leaders dismiss unfamiliar models because they don’t match their playbook, they don’t just reject ideas. They limit what the organization is allowed to become.

“Great leaders aren’t defined by how long they’ve done something. They’re defined by how often they’re willing to rethink it.”

What Self-Managed Teams Need to Work

To be clear, I’m not arguing that self-managed, cross-functional teams are inherently better. They only work when they’re supported intentionally.

In this case, the acquired teams didn’t stumble into autonomy. They evolved, shaped by architectural changes, growing product complexity, and deliberate investment in role clarity and delivery practices.

Self-managed teams work best when:

  • Team boundaries are aligned with system boundaries (Conway’s Law in action)
  • Each team has all the roles it needs to deliver independently: product, UX, engineering, QA, agile leadership
  • Leadership trusts the team to make decisions and solve problems
  • There are clear expectations for ownership, accountability, and feedback loops
  • The organization invests in agile coaching and systems thinking, not just delivery metrics

Autonomy is powerful, but it’s not a substitute for structure. It’s a different structure, distributed rather than centralized, but no less rigorous.

When organizations assume self-managed teams can succeed without support, they fail. But when they try to control teams that already have what they need to succeed, they risk breaking what’s working.

If you dismantle a working model to standardize roles without investing in the conditions that made those teams successful, you’re not gaining alignment; you’re sacrificing outcomes.

During this transition, I see the challenge of finding the right hybrid solution, whether in role responsibilities or in team structure. Only time will tell how these efforts turn out.

A Path Forward

We started the conversation by picking one model over the other. The next set of conversations should be about understanding what each model needs to succeed and recognizing what might be lost by forcing one to fit the other’s framework.

In this transition, I’m not advocating for a reversal of the decision. The leadership team has chosen the Engineering Manager model as the long-term structure. My role is to support that transition in a way that minimizes disruption, preserves what’s working, and honors the intent behind the change.

But that doesn’t mean copying a model wholesale. It means asking harder questions:

  • Can we implement the EM model without breaking value stream alignment or team autonomy?
  • Can we support delivery accountability without assigning an EM to every team if doing so fragments the architecture or inflates management layers?
  • Can we evolve role definitions to respect the existing strengths of self-managed teams instead of stripping them out?

I’ve noticed that the most effective organizations aren’t strict about sticking to rigid structures. Instead, they focus on designs that are fit for purpose.

Consider blending elements of both models:

  • Some teams may have embedded EMs; others may operate with distributed leadership and shared delivery ownership.
  • Agile responsibilities can be flexibly assigned based on team maturity, not hierarchy.
  • Career frameworks can accommodate different types of Engineering Managers as long as expectations are clear and fair and performance is measured in context.

You don’t need to choose between alignment and autonomy.

You need to design for both, based on the work, the system, and the people you have.

It isn’t easy; sometimes, a hybrid model might not scale perfectly. However, it’s often a better option than forcing consistency, which can harm results.

Final Reflection: Fit Over Familiarity

At the heart of this transition is a challenge I’ve seen a few times:

How do you unify an organization without undoing what’s already working?

The desire to standardize roles, expectations, and performance frameworks comes from a good place. But when titles are aligned without understanding the structural and cultural context that surrounds them, friction follows, quiet at first, then louder over time.

I’ve spent years helping engineering organizations navigate these types of changes, sometimes from the inside, sometimes as an advisor. And here’s what I’ve learned:

  • Job titles are not the problem, misaligned expectations are.
  • Structure should reflect system architecture, not management tradition.
  • Psychological safety and autonomy aren’t side effects of good teams, they’re preconditions for them.
  • Legacy success can cloud future-fit decisions, especially when we assume what worked before must work again.
  • Great teams thrive in models that are clear, intentional, and well-supported, whether they are EM-led or self-managed.

There is no perfect model. But there is such a thing as the right model for the moment, the product, and the architecture.

This integration effort isn’t just a structural change, it’s a chance to define what kind of engineering organization this will become.

If we stay curious, focus on outcomes, and respect the conditions that made teams effective to begin with, we can build a unified system that enables scale without sacrificing flow, clarity, or trust.

The outcome of this effort will depend on time and attitudes.

Key Takeaways

  • The EM and self-managed models are not interchangeable. Each comes with different responsibilities, accountability structures, and cultural implications.
  • Standardizing job titles without context can create unintended harm. Especially when one title represents two very different sets of expectations.
  • Misalignment erodes autonomy and psychological safety. Teams work best when they know where decisions live, and are trusted to make them.
  • Conway’s Law still applies. If team structure doesn’t mirror system architecture, coordination costs increase and ownership suffers.
  • A hybrid approach may be necessary. Especially in the short term, where context, maturity, and system constraints vary across teams.
  • You can support a transition while still protecting what works. Integration doesn’t have to mean erasure.

In the end, our goal is to establish clear and unified job responsibilities across teams, minimize churn, and ensure that no team member feels alienated or unsupported during the transition. We aim to build high-performing teams that can deliver on existing commitments while maintaining operational efficiency.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, DevOps, Engineering, Leadership, Lean, Product Delivery, Software Engineering, Value Stream Management
