Rethink Your Understanding

Transforming Software Delivery


Engineering

AI Fluent, Fundamentally Lost

December 7, 2025 by philc

The Dual Bar for Hiring in 2026

4 min read

Last week, Gene Kim and Steve Yegge published a piece on vibe coding titled Hiring in the Age of AI: What to Interview For.1 Their central question is one every engineering leader must confront: if AI has reshaped how software is built, how should we evaluate talent today?

They argue that modern interviews must identify candidates who have embraced AI, engineers who can prompt, manage context, and direct tools toward outcomes. I agree. But this view overlaps with a concerning pattern I described in my recent article, When AI Isn’t Enough.2

We are at a crossroads where two truths coexist: AI fluency is no longer optional, but it is not enough to make someone an engineer.

The “AI Crutch” Phenomenon

In recent software engineering interviews, I’ve noticed a recurring pattern. Candidates breeze through screens using AI assistants, producing clean, working code. But the moment the conversation shifts to fundamentals, they collapse.

In one instance, a candidate couldn’t explain why they chose composition over inheritance in the code they had just generated. The code was solid, but the engineer lacked a mental model of why it worked or what would break if the requirements changed.
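The distinction that candidate missed is small enough to sketch. Here is a minimal, hypothetical illustration (the class names are invented for this example, not taken from the interview): with inheritance, the service is permanently welded to one export format; with composition, the format is a swappable dependency.

```python
# Inheritance: the service IS-A CsvExporter, so its export behavior is
# fixed by the class hierarchy and can only change by changing the parent.
class CsvExporter:
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

class InheritedReportService(CsvExporter):
    def report(self, rows):
        return self.export(rows)

# Composition: the service HAS-A exporter, so the format can be swapped
# (CSV today, JSON tomorrow) without touching the service or its hierarchy.
class ComposedReportService:
    def __init__(self, exporter):
        self.exporter = exporter

    def report(self, rows):
        return self.exporter.export(rows)

rows = [(1, "a"), (2, "b")]
print(ComposedReportService(CsvExporter()).report(rows))  # 1,a / 2,b
```

An engineer with a working mental model can say exactly this: what breaks if requirements change (a new format forces a hierarchy change in the first design, a one-line constructor change in the second). That is the reasoning the generated code cannot supply on its own.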

This was a lack of foundation. AI had become a crutch, allowing them to produce strong output while masking a hollow understanding of the system.

The Great Divergence: Acceleration vs. Noise

A pattern is emerging across the industry. Software engineering is splitting into two groups, and the results are counterintuitive.

Group 1: The Architects. Senior engineers (and those with strong instincts) are achieving massive productivity gains. They can guide AI, spot hallucinations, and explain clean architecture to the tool. For them, AI is an accelerator.

Group 2: The Prompters. Engineers without fundamentals are actually getting slower. They cannot evaluate the AI’s suggestions. When the model drifts, they lack the intuition to course-correct, turning the tool into noise rather than augmentation.

This second group creates a hidden enterprise risk: The Glass Cannon.

They build systems that look impressive and powerful but shatter under the pressure of real-world constraints. The risks are invisible at first, but devastating over time:

  • The Black Box Problem: Because they cannot explain their own output, they treat their code as a third-party library. When it breaks, recovery time skyrockets.
  • Debt at Machine Speed: They may ship features, but they generate technical debt at an accelerated rate. They cannot optimize for cloud cost, architecture, performance, or resilience, and they miss silent security vulnerabilities because they assume “working” means “correct.”
  • Team Burden: They shift significant pressure onto teammates and senior engineers who must catch flawed designs, brittle patterns, and AI-driven errors during code reviews.

This shifts the cost of software development from creation (which becomes cheap) to maintenance (which becomes prohibitively expensive).

The Dual Bar for Modern Talent

Effective hiring in 2026 requires us to stop picking one lens over the other. We must test for The Dual Bar:

  1. Can the candidate reason through a problem without the aid of AI? (To ensure they aren’t building glass cannons.)
  2. Can they intentionally use AI to accelerate their work? (To ensure they remain competitive.)

We aren’t hiring for what AI might be able to do in 2030. We are hiring for what teams need to ship and maintain now. That requires a new hiring rubric.

A New Hiring Model

To surface the engineers who can think, not just the ones who can prompt, structure your interview process around these five signals:

  • Fundamentals: Test this with at least one session where AI tools are off the table. Focus on fundamentals, design reasoning, and trade-offs, not syntax recall.
  • AI Fluency: Ask them to walk through a recent AI-assisted project. How did they prompt? How did they debug model mistakes? Or have them work through a challenge in real time using AI on a shared screen.
  • Communication: In an AI world, muddled explanations lead to muddled prompts. Can they articulate technical context with precision?
  • Systems Thinking: Present a scenario with competing trade-offs (e.g., latency vs. consistency). See if they can connect decisions to the broader architecture.
  • Curiosity: Ask what they’ve experimented with in the last 90 days. Engineers thriving in this era are climbing the learning curve with intention.

Acceleration vs. Illusion

There is a fine line between acceleration and illusion. If we hire based on the wrong signals, we risk building teams with strong output but weak understanding.

The current generation of great engineers will be those who use AI as a collaborator, not a substitute for thinking. They will use these tools to amplify their strengths rather than hide their gaps.

The question every leader should ask now: Does our interview process surface the engineers who can think, or just the ones who can prompt?

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Kim, G., & Yegge, S. (2025, December 1). Hiring in the Age of AI: What to Interview For. IT Revolution. https://itrevolution.com/articles/hiring-in-the-age-of-ai-what-to-interview-for/
  2. Clark, P. (2025, November 29). When AI Isn’t Enough. Rethink Your Understanding. https://rethinkyourunderstanding.com/2025/11/when-ai-isnt-enough/

Filed Under: AI, Engineering, Leadership, Software Engineering

When AI Isn’t Enough

November 29, 2025 by philc

Why Fundamentals Still Matter in an AI-Shaped Engineering World

6 min read

In the past year, I’ve noticed a shift in how engineering candidates present themselves. A senior director on my team recently began interviewing for a critical backfill.

On paper, the candidates were strong. In the early rounds, several performed exceptionally well, with clean solutions, fast iterations, and confident code. But once the conversation moved from what they could produce with AI to what they understood without it, everything changed.

The same candidates who looked senior-level on a coding screen suddenly struggled with composition, inheritance, architectural trade-offs, or the purpose behind common design patterns. They weren’t nervous. They didn’t know.

And that’s when a deeper leadership question emerged, one that every software engineering leader I’ve spoken with over the past year is now wrestling with:

What does it mean to be a software engineer when AI can write much of the software?

The Illusion of Mastery

We’ve been pushing AI adoption in our organization since early 2023. Not because it was trendy, but because it was obvious where the future was heading. Over the summer, we doubled down on AI literacy, aiming to have every engineer use these tools comfortably and confidently by year’s end.

The early days were rocky. Engineers said the tools slowed them down. The suggestions lacked context. Resetting instructions became a ritual. Reviews took longer, not shorter, because the generated code wasn’t always correct; it only looked correct. That friction turned out to be a necessary phase.

Once engineers learned how to provide context, prompt effectively, and evaluate output, their productivity didn’t just improve; it multiplied. AI amplifies skill; it does not create it. And that dynamic is now playing out across many hiring pipelines.

Do Fundamentals Still Matter?

A school of thought is gaining momentum in the industry. I’ve heard it from candidates, managers, and even a few senior leaders:

“If you can ask AI the right questions, do you really need to understand the underlying concepts?”

It’s a tempting idea. AI can explain patterns. It can suggest architecture. It can generate code that appears correct and often is.

In specific roles, rapid prototyping, experimentation, and early-stage product exploration may be enough. But anyone who has owned an enterprise system knows the distinction: A proof of concept is not a production system.

In the world of prototypes, speed wins; in the world of enterprise platforms, correctness, reliability, durability, and performance win. The gap between the two is everything.

The New Hiring Reality: AI Is Distorting the Signal

AI has blurred the lines between junior and senior skill, at least at first glance.

Depending on your interview workflow, AI-assisted candidates often perform exceptionally well in early rounds. The solutions come fast. The code reads cleanly. The abstractions look polished. If you’re not paying attention, it’s easy to mistake output for understanding.

But when the conversation shifts to architecture, reasoning, debugging, or explaining why something works, the floor sometimes drops out.

This is not a candidate problem so much as an ecosystem problem. Our traditional hiring processes were not designed for a world where AI can mask gaps in foundational knowledge.

One candidate our director interviewed solved coding problems flawlessly with AI assistance, but could not explain the difference between inheritance and composition. He had mastered the tool, not the craft.

And that raises another concern, one that many CTOs and senior technology leaders now whisper privately: AI is enabling people to appear more capable than they actually are.

AI-Enabled Deception

We’re beginning to see cases where individuals use AI not just to enhance competence, but to manufacture the appearance of it.

Some candidates have used AI to pass interviews, screening rounds, and background checks, only to contribute little or no meaningful work once hired. I know of firsthand examples where someone worked just long enough to collect paychecks before disappearing.

The reality is that, in a screen-shared interview, candidates can quietly lean on second-monitor tools or even AI “whispers.” Everything looks legitimate, yet the candidate may be receiving real-time assistance you cannot detect. Our previous trust assumptions in technical interviews no longer reflect the capabilities of modern tools.

This Is Where Fundamentals Matter Again

Fundamentals matter, not out of nostalgia, but because high-performing systems demand them. Enterprise systems break in ways that require:

  • context
  • judgment
  • intuition
  • analytical reasoning
  • pattern literacy
  • understanding of failure domains
  • the ability to debug what AI got wrong

AI will increasingly diagnose issues before humans get involved. But evaluating whether the fix is correct still requires someone who understands the system beneath the abstraction.

Without fundamentals, engineers become dependent on AI. With fundamentals, engineers become exponentially more effective. That distinction is not negotiable.

Accountability Hasn’t Changed

A subtle misconception is emerging: if AI generated the code, responsibility shifts. It does not. Teams remain fully accountable for every line they push to production, AI-assisted or not. And at least for now and the near future, nothing about AI’s current capabilities changes that.

AI does not dilute ownership. AI does not absorb blame. AI does not change the duty of care.

If an engineer cannot explain the code they are committing, they are not ready to commit it. And if a team cannot reason about how a change behaves under load, in failure, or across distributed components, the team is not ready to own that system.

This isn’t theoretical. AI-generated code is already introducing subtle regressions, brittle logic, and incorrect assumptions. When teams ship code they don’t fully understand, failures become harder to diagnose and recover from.

Ambiguity around ownership is the fastest way to erode reliability.

Fundamentals preserve accountability. They allow engineers to validate, challenge, and harden AI-generated output with the same rigor expected of human-written code. Most importantly, they prevent teams from outsourcing judgment, the one responsibility no tool can assume.

In the current AI era, fundamentals serve as guardrails that keep systems reliable and teams accountable.

Rethinking What We Evaluate

If we expect engineers to use AI, and we should, then interviews must evolve to focus on what AI cannot conceal. These include architectural reasoning, debugging skills, the ability to assess and challenge AI-generated output, design intuition, system-level thinking, and the ability to explain decisions before writing code.

Engineers still need a strong command of foundational concepts that AI frequently mishandles. They must understand how data structures and algorithms affect performance and scalability, and how memory and state behave in real production environments. They should know core software design principles such as encapsulation, composition, immutability, and functional patterns, which guide how systems are structured and maintained.

They also benefit from fluency in common design patterns and the judgment to apply them responsibly. They need a clear grasp of APIs, contracts, and system boundaries, as well as how architectural choices play out in distributed, event-driven, and microservice-based environments. They must be able to reason about concurrency, consistency models, failure scenarios, and performance bottlenecks, areas where AI-generated code frequently introduces subtle bugs.
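Concurrency is a good example of where “looks correct” and “is correct” diverge. The sketch below is a hypothetical illustration of the classic lost-update race, the kind of subtle bug that reads cleanly in review: `counter += 1` is a read-modify-write, not an atomic operation, so concurrent threads can silently overwrite each other’s increments.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Reads counter, adds one, writes it back. A thread switch between the
    # read and the write loses an update; nothing crashes, totals just drift.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # The lock makes the read-modify-write atomic, so no updates are lost.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n=100_000, threads=4):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

print(run(safe_increment))    # always 400000
print(run(unsafe_increment))  # may print less than 400000 when updates are lost
```

Both versions pass a single-threaded test and look identical in a diff-sized review. Spotting the difference requires exactly the mental model of memory and state under concurrency described above.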

Finally, they require strong testing, debugging, and diagnostic skills. Engineers must be able to interpret logs, metrics, traces, and behavioral patterns to understand what software is actually doing rather than relying solely on what an AI claims it should do.

For now, these skills are what set high-performing, AI-capable engineers apart.

The Bottom Line

AI is transforming software development at a pace we haven’t seen since the shift from on-prem systems to the cloud. But speed introduces its own risks. Leaders must now answer a question that will define the next decade of engineering:

Do we want teams that generate code with AI, or teams that understand, validate, and elevate what AI produces?

Because in proofs of concept, AI might be enough. In enterprise systems, where durability, reliability, and trust matter, misunderstanding comes at a cost. AI is an extraordinary amplifier. Fundamentals remain the stabilizer.

Engineering organizations that insist on both will build the most resilient and competitive systems in the years ahead.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: AI, Engineering, Leadership, Software Engineering

The Price of Alignment

October 21, 2025 by philc

How Well-Intentioned Integration Can Undermine Modern Architecture and Team Autonomy

11 min read

Having been part of numerous acquisitions and technical due diligence efforts throughout my career, I have observed familiar patterns in mergers. Leadership evaluates architecture, the technology stack, team skills, and organizational structure to assess fit and value. I have seen how integration decisions can reinforce or erode what made an acquisition valuable.

In one recent integration, however, the process risked undoing the very capabilities that had made the acquired organization successful. Its architecture and team design, once aligned for speed and autonomy, were reshaped in ways that constrained both.

A large, established technology company, which I will call LegacyTech, acquired a smaller, fast-moving company, AgileWorks. LegacyTech’s architecture is still largely monolithic, shaped by years of prioritizing control, predictability, and reliability. Its teams are large, its operating rhythm consistent, and its management approach straightforward: one engineering manager for about eight engineers, one product manager per team, and a typical structure applied across the organization. Part of this structure reflected LegacyTech’s leadership instincts, and the other part reflected the very real need to standardize roles, expectations, and career frameworks across the combined company.

AgileWorks had spent nearly a decade transforming from a monolithic system into a microservices-based organization. Its teams were small, long-lived, cross-functional, and aligned to clearly defined domains and subdomains. Each team owned its services end to end, including logic, data, deployments, and the flow of value. They operated with local decision-making, shipped independently, and continuously improved without waiting for outside coordination.

At AgileWorks, engineering managers focused on people and career development rather than directing day-to-day delivery. Delivery teams included dedicated Agile leaders, product partners, technical leads, user experience, QA, and the skills required to deliver independently.

By the time of the acquisition, AgileWorks had become the kind of organization many aspire to be: fast, autonomous, and adaptable.

The Integration Challenge

After the acquisition, LegacyTech required all teams, both legacy and newly acquired, to adopt the same structure, reporting model, and span-of-control expectations. AgileWorks’ small, domain-aligned teams were reshaped to match LegacyTech’s organization. What seemed rational and efficient on paper quickly began influencing how work flowed in practice. Structure does not simply describe how people work. It defines how they work.

When Structure Shapes the System

Conway’s Law teaches that organizations design systems that mirror their communication structures.

At AgileWorks, small teams designed, built, and deployed small services. At LegacyTech, large teams built large systems. Both structures made sense in their respective architectures.

When the two companies merged, the system began to change, not because the code changed, but because the structure changed. Teams that once released independently now had to coordinate across domains. Engineering managers balanced conflicting priorities across multiple queues. Flow slowed. The architecture itself risked drifting toward a more coupled and synchronized model.

This was not personal or political. It was mechanics.

Understanding the Architectural Misalignment

LegacyTech was not wrong. It was simply optimized for a different world.

Its monolithic architecture enabled centralized decision-making, broad responsibilities, and larger teams. In that environment, consistency and uniform structures work well.

AgileWorks’ architecture required something different. It operated within a distributed microservice architecture with bounded contexts. Each service independently owns its logic, data, and deployment lifecycle. Because the architecture was modular, the teams were modular as well. Small teams were not a stylistic preference. They were a structural capability.

Seen through LegacyTech’s perspective, AgileWorks’ structure looked unfamiliar: more teams, fewer people per team, independent decisions, separate flows. Without curiosity about architectural context, autonomy can look like fragmentation.

Why the “Too Many SKUs” Question Misses the Point

As integration continued, a recurring question arose: “Why does AgileWorks have so many SKUs?”

To LegacyTech, the number seemed excessive. A SKU is simply an internal identifier for a product, but in AgileWorks’ system, each SKU represented a bounded context, a domain or subdomain with its own architecture, team, and flow of value.

This is the natural outcome of microservices. As Martin Fowler and Dave Farley describe them, microservices are small, autonomous, independently deployable, and aligned to a single, well-defined domain boundary. That was exactly how AgileWorks structured its system. Each SKU marked a clean separation of concerns, not a proliferation of products.

Microservices reduce dependency drag by allowing teams to work in parallel, which DORA research consistently shows is a predictor of higher performance. What LegacyTech viewed as unnecessary complexity was in fact a sign of architectural maturity.

The Hidden Cost of Consistency

Consistency creates clarity and predictability. It is appealing, especially in large organizations. But when consistency is applied without understanding architectural intent, it can silently erode the strengths the acquisition sought to preserve.

Reassigning domain-aligned teams into broader groups collapses boundaries that AgileWorks had intentionally kept separate. From the outside, the structure appears aligned. On the inside, queues form, decisions slow, and delivery suffers.

What appeared efficient became regression.

Leadership Context and Legacy Mindsets

LegacyTech’s leaders emphasized consistency across the combined organization. One executive summarized it plainly: “We cannot redesign all of our teams to match the company we acquired, so we will redesign theirs to match ours.” From their perspective, this was a practical decision.

With far more teams than AgileWorks, standardizing the smaller footprint seemed simpler and more efficient. AgileWorks’ leadership understood this dynamic well; it was the same approach they had used when integrating engineering teams during their own previous acquisitions.

LegacyTech’s desire for consistency created immediate pressure to reorganize AgileWorks’ teams. To meet these expectations while minimizing disruption, the AgileWorks department head adopted a phased hybrid approach.

Instead of dismantling domain boundaries outright, he grouped related subdomain teams under individual engineering managers until each manager had an average of eight software engineers in their hierarchy. This met LegacyTech’s span-of-control rules while protecting delivery continuity for committed roadmap work. Leadership and HR at LegacyTech formally approved the plan.

Although the plan satisfied the stated requirements, it did not align with how LegacyTech leaders mentally modeled team structure. They were accustomed to a one-engineering-manager-to-one-team pattern tied to a single codebase. Seeing multiple small subdomain teams grouped under a single engineering manager did not fit that worldview.

Despite backchannel conversations, no one approached the AgileWorks leader directly to understand why the hybrid structure existed or what it was designed to protect. Over time, this misunderstanding worked against him.

The original team design misunderstanding also set the stage for deeper structural changes. To eliminate ambiguity and fully align the organizations, AgileWorks was required to adopt LegacyTech’s management model. This meant removing the Agile Leader role from each team and shifting delivery responsibility directly to engineering managers.

The shift was more than procedural. It fundamentally redefined the role of the engineering manager at AgileWorks.

Engineering managers, who had previously focused on people development and coaching, were now accountable for day-to-day delivery, performance, coordination, and practice consistency. Their span of control increased from five to eight, requiring them to support two or sometimes three small subdomain teams, since most AgileWorks teams averaged three developers.

What had once been a role centered on enabling people quickly became one centered on directing delivery. The cognitive load increased, role boundaries blurred, and the structure that once allowed AgileWorks to move rapidly and independently became increasingly challenging to maintain.

What had once enabled flow and autonomy at the subdomain-scoped team level now introduced friction and confusion. These role changes did not remain at the organizational layer; they risked influencing how the architecture behaved.

This tension over roles and structure surfaced again in how leaders interpreted AgileWorks’ team sizes and scopes.

Team Design, Domain Thinking, and the Case for Larger Scopes

Misunderstanding also surfaced around team size and scope. AgileWorks had several small teams working within the same domain, which some LegacyTech leaders interpreted as fragmentation. The issue was not fragmentation. It was a missed opportunity to ask why the structure existed in the first place.

In Transformed, Marty Cagan describes a common pitfall in product transformations. Organizations create too many narrow teams that each own a thin slice of the product. These slices are too small to deliver real outcomes independently. Handoffs increase, dependencies grow, and accountability becomes unclear.

Cagan’s recommendation is not necessarily to build bigger teams; it is to give small teams a larger scope. Increase what a team owns, not the number of people on it.

AgileWorks followed this principle. Drawing from Domain-Driven Design, Separation of Concerns, microservices, and distributed systems patterns:

  • Domains aligned to product portfolios
  • Subdomains aligned to individual products or major capabilities
  • Each team owned a full subdomain end-to-end

This structure gave every team deep domain expertise, architectural control, independent deployment capability, and clear ownership of outcomes. Small teams did not mean fragmented teams. They owned coherent, customer-facing capabilities aligned with product portfolios.

LegacyTech’s model relied on broader functional groupings within a monolithic system. Engineers were often reassigned to different teams based on capacity needs. That model works in monoliths but does not map cleanly to distributed systems where autonomy and boundary clarity matter.

Curiosity would have bridged this gap. A simple question about how AgileWorks’ teams aligned to product portfolios and subdomains might have made the structure clear and its purpose obvious.

When Experience Replaces Curiosity

A moment shared with me later captured the tension clearly. A long-tenured manager walked a LegacyTech senior leader through AgileWorks’ architecture and team structure. The leader responded, “I have been doing this for thirty years, and my playbook works. I do not understand how your organization works, nor do I care to at this point.”

This approach is a familiar leadership pattern. Playbooks shaped by years of success do not always map to new architectural contexts. One size does not fit all. Playbooks can be adjusted to match context and practices. Experience is valuable, but experience without curiosity becomes limiting. And in distributed systems, where autonomy and domain clarity matter, it can quietly become destructive.

When Naming Becomes a Surrogate for Understanding

Another unexpected friction point involved team names. Years earlier, AgileWorks allowed teams to name themselves. They chose names like Red, Blue, and Green. These names were cultural, not architectural.

Inside the organization, each team was clearly mapped to its domain, products, and value stream. Ownership was unambiguous.

Yet some LegacyTech leaders found the names confusing. Some made jokes about them. They expected teams to be self-describing, named after products. Ironically, LegacyTech’s argument contradicted itself. It also had several teams name themselves after animals, terms, and cultural references. Labels alone did not reflect product alignment on either side.

Asking a simple question, “How do these team names map to your products and domains?”, would have resolved everything.

The Diligence Gap

AgileWorks’ success came not only from its code but from how its teams worked: decoupled, autonomous, and aligned with their architecture.

Ignoring that alignment risks dismantling the very system that created the value in the first place.

You can acquire the product and organization, but if you do not understand the system that built it, changing one without respecting the other often produces unintended consequences.

When Capacity Pressure Resurrects Old Patterns

Another difference emerged in how each organization responded to capacity pressure. AgileWorks designed its teams to be long-lived. Engineers rarely moved between teams, which allowed domain expertise, trust, and ownership to develop over time.

LegacyTech worked differently. At the end of major delivery cycles, leaders reassigned engineers wherever demand was highest. Teams functioned as resource pools, flexible and frequently reshuffled. Engineers did not always have a choice, and these moves were often framed as career opportunities.

When demand exceeds supply, leaders fall back on the operating model they trust. In larger, monolithic organizations, pooling and trading engineers across teams can work because the teams themselves are large and the domains broad. But in a distributed architecture with small, domain-aligned subdomain teams of three developers, this same practice can have a significant negative impact. Removing a single engineer destabilizes the team’s knowledge base, disrupts flow, and undermines the deep domain context those teams rely on.

When people become fungible, domains may become fungible as well. When domains become fungible, ownership becomes shallow. When ownership becomes shallow, flow slows, defects rise, and quality declines.

LegacyTech was not acting in bad faith. They were relying on a model that had worked in their environment. But AgileWorks required long-lived teams because its architecture depended on them. When capacity was tight, AgileWorks moved teams to the work, not individual team members.

Why Context and Choice Determine Team Design

Modern leadership provides many frameworks to draw from: Cagan, Kersten, Team Topologies, Agile, Lean, DevOps, Value Stream Management, Product Operating Models, Scrum, XP, and Kanban. Each offers value, but none is universally correct. Their effectiveness depends entirely on the context in which they are applied.

Some leaders prefer fewer teams with broader scopes. Others prefer many small, domain-aligned teams. Both approaches can succeed. Both can fail. The difference is not the model itself but the architecture, constraints, and business environment it must support.

So the question is never which team model is better. The real question is which structure fits the architecture, flow constraints, and organizational realities of this moment in time.

And even more importantly, do leaders understand why a structure existed in the first place? Most team designs are not arbitrary. They reflect hard-earned lessons about architectural boundaries, flow of work, domain ownership, operational needs, and past failures.

Curiosity is what makes these choices effective. It separates meaningful alignment from surface-level consistency. Without curiosity, even well-meaning integration decisions can erase the very patterns that made a system successful.

Closing Reflection

Through curiosity, I came to understand why LegacyTech made its choices. They were not dismissing AgileWorks’ model. They were responding to their own context, constraints, and operating history. Their decisions made sense inside the environment they knew.

This is the point.

This is not a story about who was right or wrong. It is a story about what happens when architecture and structure drift apart. When a monolithic organization acquires a microservices-driven one, success depends not only on integrating people and tools, but also on integrating understanding.

When you acquire a product, you also acquire the organizational DNA that built it. Structure, practices, team design, flow, and architecture evolve together. Change one without respecting the other, and the system will reshape itself in ways you may not expect.

Alignment is powerful when it is guided by curiosity. Curiosity turns alignment from imposition into learning. And understanding becomes the bridge between two different worlds, trying to operate as one.

Sustainable integration is not about enforcing a single model, but about recognizing the strengths each system brings and understanding how to align them without erasing what makes them effective.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


Influences and Further Reading

Established ideas from modern software architecture, team design, and flow-based organizational practices inform this article. The concepts discussed draw on Domain-Driven Design, microservices and distributed systems principles, Team Topologies, the Flow Framework, DevOps and Lean thinking, and contemporary product operating models.

Notable contributors to these bodies of work include Eric Evans, Martin Fowler, Dave Farley, Manuel Pais, Matthew Skelton, Marty Cagan, and Mik Kersten, as well as research from the DORA community (Accelerate). Their work has shaped much of today’s understanding of how architecture, team structure, and organizational context interact to influence delivery performance and long-term success.

Filed Under: Agile, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

From Two Pizzas to One: How AI Reshapes Dev Teams

October 2, 2025 by philc

Exploring how AI could reshape software teams: smaller pods, stronger guardrails, and the balance between autonomy and oversight.

7 min read

For more than two decades, Jeff Bezos’s “two-pizza team” rule has been shorthand for small, effective software teams: a group should be small enough that two pizzas can feed them, typically about 5–10 people. The principle is simple: fewer people means fewer communication lines, less overhead, and faster progress. The math illustrates this well: 10 people create 45 communication channels, while four people create just six. Smaller groups spend less time coordinating, which often leads to faster outcomes.
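The channel count comes from the handshake formula, n(n−1)/2 pairwise connections among n people. A few lines make the growth obvious:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

print(channels(10))  # 45
print(channels(4))   # 6
print(channels(5))   # 10: one person beyond four adds four new channels
```

The quadratic growth is the whole argument for small teams: each added member must coordinate with everyone already there.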

This article was sparked by a comment at this year’s Enterprise Technology Leadership Summit. A presenter suggested that AI could soon reshape how we think about team size. That got me wondering: what would “one-pizza teams” actually look like if applied to enterprise-grade systems where resilience, compliance, and scalability are non-negotiable?

The Hype: “Do We Even Need Developers?”

In recent months, I’ve heard product leaders speculate that AI might make developers optional. One senior product manager even suggested, half-seriously, that “we may not need developers at all, since AI can write code directly.” On the surface, that sounds bold. In reality, it reflects limited hands-on experience with the current tools. Generating a demo or prototype with AI is one thing; releasing code into a production system that supports high-volume, transactional workloads with rollback, observability, and compliance requirements is another. It’s easy to imagine that AI can replace developers entirely until you’ve lived through the complexity of maintaining enterprise-grade systems.

I’ve also sat in conversations with CTOs and VPs excited about the economics. AI tools, after all, look cheap compared to fully burdened human salaries. On a spreadsheet, reducing teams of 8–12 engineers down to one or two may appear to unlock massive savings. But here again, prototypes aren’t production, and what looks good in theory may not play out in practice.

The Reality Check

The real question isn’t whether AI eliminates developers, it’s how it changes the balance between humans, tools, and team structure. While cost pressures may tempt leaders to shrink teams, the more compelling opportunity may be to accelerate growth and innovation. AI could enable organizations to field more small teams in parallel, modernize multiple subdomains simultaneously, deliver features faster, and pivot quickly to outpace their competitors.

Rather than a story of headcount reduction, one-pizza teams could become a story of capacity expansion, with more teams and a broader scope, all while maintaining the same or slightly fewer people. But this is still, to some extent, a crystal ball exercise. None of us can predict with certainty what teams will look like in three, five, or ten years. What seems possible today is that AI enables smaller pods to take on more responsibility, provided we approach this shift with caution and discipline.

Why AI Might Enable Smaller Teams

AI’s value in this context comes from how it alters the scope of work for each developer.

Hygiene at scale. Practices that teams often defer, such as tests, documentation, release notes, and refactors, can be automated or continuously maintained by AI. Quality could become less negotiable and more baked into the process.

Coordination by contract. AI works best when given context. PR templates, paved roads, and CI/CD guardrails provide part of that. But so do rule files, lightweight markdown contracts such as cursor_rules.md or claude.md that encode expectations for test coverage, security practices, naming conventions, and architecture. These files give AI the boundaries it needs to generate code that aligns with team standards. Over time, this could transform AI from a generic assistant into a domain-aware teammate.
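To illustrate how such a rules file could be machine-enforced rather than merely advisory, here is a minimal sketch. The file format, the `coverage:` directive, and both function names are hypothetical, not the API of any real tool:

```python
# Sketch: a CI step that reads an expectation from a hypothetical markdown
# rules file (e.g. a "- coverage: 85" line in cursor_rules.md) and gates
# the build on the measured test coverage. All names are illustrative.
import re

def parse_min_coverage(rules_text: str, default: float = 80.0) -> float:
    """Extract a 'coverage: NN' expectation from a rules file, if present."""
    match = re.search(r"coverage:\s*(\d+(?:\.\d+)?)", rules_text, re.IGNORECASE)
    return float(match.group(1)) if match else default

def coverage_gate(measured: float, required: float) -> bool:
    """True when the measured coverage meets the encoded expectation."""
    return measured >= required

rules = "## Standards\n- coverage: 85\n- naming: snake_case"
required = parse_min_coverage(rules)
print(coverage_gate(90.0, required))  # True: 90% meets the 85% expectation
print(coverage_gate(70.0, required))  # False: build would be blocked
```

The point is not this particular parser but the pattern: expectations written down once, consumed by both the AI assistant and the pipeline.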

Broader scope. With boilerplate and retrieval handled by AI, a small pod might own more of the vertical stack, from design to deployment, without fragmenting responsibilities across multiple groups.

Reduced overhead. Acting as a shared memory and on-demand research partner, AI can minimize the need for lengthy meetings or additional specialists. Coordination doesn’t disappear, but some of the lower-value overhead could shrink.

From Efficiency to Autonomy

The promise isn’t simply in productivity gains per person; it may lie in autonomy. AI could provide small pods with enough context and tooling to operate independently. This autonomy might enable organizations to spin up more one-pizza teams, each capable of covering a subdomain, reducing technical debt, delivering features, or running experiments. Instead of doing the same work with fewer people, companies might do more work in parallel with the same resources.

How Roles Could Evolve

If smaller teams become the norm, roles may shift rather than disappear.

  • Product Managers could prototype with AI before engineers write code, run quick user tests, and even handle minor fixes.
  • Designers might use AI to generate layouts while focusing more on UX research, customer insights, and accessibility.
  • Engineers may be pushed up the value chain, from writing boilerplate to acting as architects, integrators, and AI orchestrators. This creates a potential career pipeline challenge: if AI handles repetitive tasks, how will junior engineers gain the depth needed to become tomorrow’s architects?
  • QA specialists can transition from manual testing to test strategy, utilizing AI to accelerate execution while directing human effort toward edge cases.
  • New AI-native roles, such as prompt engineers, context engineers, AI QA, or solutions architects, may emerge to make AI trustworthy and enterprise-aligned.

In some cases, the traditional boundaries between product, design, and engineering could blur further into “ProdDev” pods, teams where everyone contributes to both the vision and the execution.

The Enterprise Reality

Startups and greenfield projects may thrive with tiny pods or even solo founders leveraging AI. But in enterprise environments, complexity doesn’t vanish. Legacy systems, compliance, uptime, and production support continue to require human oversight.

One-pizza pods might be possible in select domains, but scaling them down won’t be simple. Where it does happen, success may depend on making two human hats explicit:

  • Tech Lead – guiding design reviews, threat modeling, performance budgets, and validating AI output.
  • Domain Architect – enforcing domain boundaries, compliance, and alignment with golden paths.

Even then, these roles rely on shared scaffolding:

  • Production Engineering / SRE – managing incidents, SLOs, rollbacks, and noise reduction.
  • Platform Teams – providing paved roads like IaC modules, service templates, observability baselines, and policy-as-code.

The point isn’t that enterprises can instantly shrink to one-pizza teams, but that AI might create the conditions to experiment in specific contexts. Human judgment, architecture, and institutional scaffolding remain essential.

Guardrails and Automation in Practice

For smaller pods to succeed, standards need to be non-negotiable. AI may help enforce them, but humans must guide the judgment.

Dual-gate reviews. AI can run mechanical checks, while humans approve architecture and domain impacts.

Evidence over opinion. PRs should include artifacts, tests, docs, and performance metrics, so reviews are about validating evidence, not debating opinions.

Security by default. Automated scans block unsafe merges.

Rollback first. Automation should default to rollback, with humans explicitly approving any fix-forward.

Toil quotas. Reducing repetitive ops work quarter by quarter keeps small teams sustainable.

Beyond CI, AI can also shape continuous delivery by optimizing pipelines, enforcing deployment policies, validating changes against staging telemetry, and even self-healing during failures.
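One way to picture the dual-gate idea in code: mechanical checks (which AI can run unattended) and human judgment are separate gates, and both must pass. The `PullRequest` fields below are hypothetical stand-ins for whatever metadata your CI system actually exposes:

```python
# Sketch of a "dual-gate" merge policy. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class PullRequest:
    lint_passed: bool            # mechanical: AI/automation can verify
    tests_passed: bool           # mechanical
    security_scan_passed: bool   # mechanical ("security by default")
    human_arch_approval: bool    # judgment: a person signs off on design impact
    rollback_plan: bool = True   # "rollback first": a plan exists by default

def can_merge(pr: PullRequest) -> bool:
    mechanical_gate = pr.lint_passed and pr.tests_passed and pr.security_scan_passed
    judgment_gate = pr.human_arch_approval and pr.rollback_plan
    return mechanical_gate and judgment_gate
```

Neither gate can substitute for the other: a PR with perfect mechanical checks but no human architectural approval stays blocked, and vice versa.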

What’s Real vs. Wishful Thinking (2025)

AI is helping, but unevenly. Gains emerge when organizations re-architect workflows end-to-end, rather than layering AI on top of existing processes.

Quality and security remain human-critical. Studies suggest a high percentage of AI-generated code carries vulnerabilities. AI may accelerate output, but without human checks, it risks accelerating flaws.

AI can make reviews more efficient by summarizing diffs and flagging issues, but final approval still requires human judgment on architecture and risk.

And production expectations haven’t changed. A 99.99% uptime commitment still allows only about 13 minutes of downtime per quarter. Even if AI can help remediate, humans remain accountable for those calls.
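The downtime budget an SLO implies is easy to compute directly (99.99% over a quarter works out to roughly 13 minutes):

```python
# Downtime budget implied by an availability SLO over a given period.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960

def downtime_budget_minutes(slo: float, period_minutes: float) -> float:
    """Minutes of allowed downtime for an SLO over a period."""
    return (1.0 - slo) * period_minutes

quarter = MINUTES_PER_YEAR / 4
print(round(downtime_budget_minutes(0.9999, quarter), 1))           # 13.1
print(round(downtime_budget_minutes(0.9999, MINUTES_PER_YEAR), 1))  # 52.6
```

At 99.95% the quarterly budget grows to about 66 minutes, which is why the checklist later in this piece treats 99.95%+ SLOs as a different risk class.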

Practitioner feedback is also worth noting. In conversations with developers and business users of AI, most of whom are still in their first year of adoption, the consensus is that productivity gains are often inflated. Some tasks are faster with AI, while others require more time to manage context. Most people view AI as a paired teammate, rather than a fully autonomous agent that can build almost everything in one or two shots.

Challenges to Consider

Workforce disruption. If AI handles more routine work, some organizations may feel pressure to reduce the scope of specific roles. Whether that turns into cuts or an opportunity to reskill may depend on leadership choices.

Mentorship and pipeline. Junior engineers once learned by doing the work AI now accelerates. Without intentional design of new learning paths, we may risk a gap in the next generation of senior engineers.

Over-reliance. AI is powerful but not infallible. It can hallucinate, generate insecure code, or miss subtle regressions. Shrinking teams too far might leave too few human eyes on critical paths.

A Practical Checklist

  • Product risk: 99.95%+ SLOs or regulated data? Don’t shrink yet.
  • Pager noise: <10 actionable alerts/week and rollback proven? Consider shrinking.
  • Bus factor: ≥3 engineers can ship/release independently? Consider shrinking.
  • AI maturity: Are AI checks and PR evidence mandatory? Consider shrinking.
  • Toil trend: Is toil tracked and trending down? Consider shrinking.
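The checklist above can be condensed into a single readiness function. The thresholds mirror the bullets, and the parameter names are illustrative rather than any standard metric schema:

```python
# Hypothetical encoding of the shrink-readiness checklist.
def ready_to_shrink(slo: float,
                    weekly_actionable_alerts: int,
                    bus_factor: int,
                    ai_gates_mandatory: bool,
                    toil_trending_down: bool) -> bool:
    # Product risk: 99.95%+ SLOs (or regulated data) rule out shrinking now.
    if slo >= 0.9995:
        return False
    return (weekly_actionable_alerts < 10   # pager noise under control
            and bus_factor >= 3             # >=3 engineers can ship independently
            and ai_gates_mandatory          # AI checks + PR evidence enforced
            and toil_trending_down)         # toil tracked and declining
```

A function like this is useful less as automation than as a forcing question: if you cannot supply honest values for these inputs, you are not ready to run the experiment.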

Bottom Line

AI may make one-pizza teams possible, but only if automation carries the repetitive workload, humans retain judgment and oversight, and guardrails enforce standards. Done thoughtfully, smaller pods don’t mean scarcity; they can mean focus.

And when organizations multiply these pods across a portfolio, the outcome might not just be sustaining velocity but accelerating it: more features, faster modernization, shorter feedback loops, and quicker pivots against disruption.

This is the story of AI in team structure, not doing the same with less, but doing more with the same.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Product Delivery, Software Engineering

So, What Does a VP of Software Engineering Do?

August 21, 2025 by philc

7 min read

This article began as a response to a colleague in the industry, Patrice Corbard, a DevOps advisor, trainer, and author in France, who asked me a simple but difficult question:

“Can you describe what you consider to be the most important jobs-to-be-done in your role as VP of Engineering, as well as the pains and gains, ranked in order of importance?”

It’s a fair question. If you search online, you’ll find plenty of job descriptions and responsibility lists. What you won’t find is a candid look at the role from the inside, what you’re accountable for, what makes the job rewarding, and what makes it challenging.

My answer to Patrice became the inspiration for this article. What follows isn’t a universal definition of the VP of Engineering role. It’s how the role has applied to me, shaped by my experience, the transformations I’ve led, the leadership cultures I’ve worked within, and the context of our change initiatives, growth, and acquisitions in a rapidly evolving digital industry.

When most people ask me what I do as a VP of Software Engineering, they sometimes expect a simple answer: “You lead engineers, right?”

The truth is, the role of VP of Engineering isn’t black-and-white. It depends heavily on:

  • The size and stage of the company
  • The leadership culture you operate within
  • The context of the organization

What I can share is my journey, how the role has evolved for me, what I’ve been held accountable for, and what it’s been like.

My Journey to VP of Engineering

When I joined Parchment more than a decade ago, I brought with me over a decade of enterprise software engineering, architecture, and leadership experience.

At Parchment, the engineering teams were still operating in waterfall silos. The organization had only just begun moving to Agile and adopting Scrum ceremonies. Delivery was slow, fragile, and disconnected from business outcomes.

My earliest accountability as a Director of Engineering was helping engineering transform with highly skilled, passionate, open-minded team members, moving us toward Agile, Lean, and DevOps practices that allowed us to ship with confidence.

The shift wasn’t simply about adopting new frameworks, but demanded a deeper transformation. It required me, my team, and many leaders around me to learn, unlearn, and relearn. To lead effectively, I had to embody humility and set the standard through my own actions.

Over time, my role expanded as the organization scaled. What started with a few dozen engineers as a Director eventually grew to more than 175 people across 10 countries as a VP of Engineering. And with that growth, the scope of my responsibilities shifted.

Four Pillars of Accountability

Looking back, I can summarize my VP responsibilities into four enduring accountabilities:

1. Enterprise-Level Software Quality and Resilience

  • Ensure stability and reliability of delivery
  • Support automation initiatives that shorten lead times
  • Use flow metrics to measure and improve

2. People Engagement

  • Without engaged teams, delivery grinds to a halt
  • Engagement comes from psychological safety, inclusion, autonomy, purpose, and leadership that people trust

3. Retention and Development

  • Attracting great talent is hard; retaining them is harder
  • Build career frameworks, coach managers, and provide growth opportunities
  • Much of my time went into developing engineers who had just stepped into leadership

4. Skills and Capabilities

  • Keep teams competitive in today’s tech landscape
  • Don’t chase every shiny tool, but invest in learning, experimentation, and the right capabilities

Everything else I did, adopting Value Stream Management (VSM), integrating AI copilots, partnering with Product, aligning with Finance, flowed back into these four pillars.

Balancing Global Talent

Another dimension of the VP role is managing the distribution and cost of talent. U.S. hiring alone can’t always scale, so part of my responsibility was building a model that included nearshore, offshore, and local teams.

Sometimes that meant intentionally diversifying where and how we hired. Other times, it meant adapting through acquisitions in new geographies, inheriting engineering teams with their own culture, practices, and expectations.

In both cases, the challenge wasn’t just financial. It was about creating alignment across different regions, time zones, and cultures, while still building one cohesive engineering organization.

Getting this right was critical not only to scaling sustainably but also to retaining talent and protecting delivery resilience as the company grew globally.

Beyond Delivery: Transformation and Business Alignment

The VP role isn’t only about keeping the trains running.

I was deeply involved in:

  • Technical due diligence in acquisitions
  • Aligning metrics with business outcomes
  • Contributing to valuations during funding rounds and ownership changes

It also meant championing long-term transformation strategies:

  • Moving from waterfall to Agile, Lean, DevOps, and Continuous Delivery
  • Adopting Value Stream Management for end-to-end visibility
  • Driving AI literacy and adoption across engineering

AI adoption is about building a culture of learning, experimentation, and practical application so teams build real capability.

One truth I learned: engineering only matters if it’s connected to the growth engine of the business. Otherwise, it gets treated as a cost center.

VP of Engineering vs. CTO

I’m often asked: “What’s the difference between a VP of Engineering and a CTO?”

From my experience:

  • CTO puts technology first, people second. They set the vision, connect strategy to growth, and influence investors.
  • VP of Engineering puts people and practices first, technology second. My job is to build engaged teams and strong delivery systems so the strategy is executed at scale.

Both roles are essential. One is about what we bet on. The other is about how people and systems deliver it.

Leadership Culture Shapes the Role

Another factor that defined my journey was who I reported to and the leadership culture around me.

  • For most of my tenure, I reported to a leader who gave me autonomy and trusted me. Those years were expansive; we built team autonomy, focused on improving delivery cadence, agility, and flow, and made measurable progress.
  • When that leader retired, a new CTO arrived. He spoke Agile but led with command-and-control habits. It clashed with our progress and felt like a hand grenade in the middle of our transformation.
  • Later, after an acquisition, a VP of Product replaced the CTO and owned both Product and technology. Our philosophies diverged, but where we were aligned, in people and culture, we found common ground.

The lesson: your autonomy and alignment with peers and superiors shape the job.

One of the most underestimated jobs-to-be-done for a VP of Engineering is this: setting and sustaining long-term strategy, digital transformation, Agile, VSM, team outcomes and performance feedback, and building competitive advantage through culture and delivery. But here’s the catch: a change in senior leadership above you can accelerate that strategy, or derail it overnight.

From T-Shaped to V-Shaped Skills

A VP of Engineering can’t stand still.

Early on, I had strong T-shaped skills: depth in engineering with breadth in adjacent areas. But to operate at the executive level, I had to develop what I call V-shaped skills: depth in engineering plus meaningful depth in several other domains.

That meant deliberate, ongoing investment in learning:

  • Scaling organizations: team topologies, value streams, spans of control
  • Strategy and OKRs: translating strategy into objectives and results
  • Funding, M&A: diligence, integration, and how maturity shows up in valuation
  • Thinking like a CEO: runway, margins, growth levers, complex tradeoffs
  • Product management: enough depth to partner with product leaders
  • Finance fluency: COGS, OPEX, ROI, metrics that tie tech to earnings
  • Modern architecture & technology: staying credible without micromanaging
  • Leadership craft: books, workshops, conferences, sharpening coaching and communication

It also meant mentoring beyond engineering. In 2024, I participated in our Women in Leadership program, coaching a developing leader. Supporting leaders outside my org was a way to invest in a broader leadership fabric.

And it wasn’t just about formal learning. My success was shaped by mentors and the network I built both inside and outside my organization. Collaborating with senior executives in other companies helped me benchmark our progress, validate practices, and learn from both successes and failures. That external perspective was invaluable in shaping my decisions and accelerating transformation.

The Hard Parts

It isn’t all bright spots.

Being VP of Engineering also meant being accountable for cost-saving measures and layoffs. Those are the darkest days, balancing empathy with business realities while protecting trust and continuity as best you can.

The Highlights

But there are bright spots, too, the moments that make the hard parts worth it.

  • Contributing to a team driving significant organizational growth.
  • Watching team members progress and grow into leaders, contributors, and mentors themselves.
  • Seeing the organization thrive and succeed because of the engineering team’s partnership with product and business.
  • Having the opportunity to mentor others, both inside and outside engineering, and know you’re investing in the company’s leadership future.
  • Helping to build a culture that makes teams proud to come to work, where people feel connected, trusted, and valued.
  • Having a direct impact on something bigger than you.

These are the outcomes that fuel purpose in the role and make the investment in people and practices pay off.

What I’ve Learned

So what does a VP of Engineering do?

  • Ensure software is reliable and resilient
  • Keep teams engaged and thriving
  • Retain and develop people with clear growth paths
  • Invest in skills and capabilities so teams stay competitive
  • Lead transformation by learning, unlearning, and relearning
  • Align execution with business outcomes
  • Contribute to M&A, funding, and investor communication
  • Drive practices like AI adoption to build long-term capability
  • Navigate leadership cultures, reporting lines, and autonomy
  • Expand from T-shaped to V-shaped skills, supported by mentors and networks
  • Balance global talent through local hiring, nearshore/offshore models, and acquisitions in new geographies

And most of all, accept that the role is never static; it shifts as the company shifts.

Closing Thought

If you’re wondering what a VP of Engineering does, the only honest answer is: it depends.

It depends on the organization, its maturity, and the leadership culture. My story is just one version, shaped by digital transformation, scaling, global talent strategy, AI adoption, mentorship, and peer networks.

What hasn’t changed is this: the job is about building systems of delivery and leadership that last, systems that sustain people, products, and business value long after a single leader has moved on.

And remember: this is just a taste of how the VP of Engineering role has applied to me, in my organizations, and my context.


Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com

Filed Under: Engineering, Leadership, Software Engineering

AI Is Improving Software Engineering. But It’s Only One Piece of the System

July 31, 2025 by philc

5 min read

A follow-up to my last post, Leading Through the AI Hype in R&D, this piece explores how strong AI adoption still needs systems thinking, responsibility, and better leadership focus.

Leaders are moving fast to adopt AI in engineering. The urgency is real, and the pressure is growing. But many are chasing the wrong kind of improvement, or rather, focusing too narrowly.

AI is transforming software engineering, but it addresses only one part of a much larger system. Speeding up code creation doesn’t solve deeper issues like unclear requirements, poor architecture, or slow feedback loops, and in some cases, it can amplify dysfunction when the system itself is flawed.

Engineers remain fully responsible for what they ship, regardless of how the code is written. The real opportunity is to increase team capacity and deliver value faster, not to reduce cost or inflate output metrics.

The bigger risk lies in how senior leaders respond to the hype. When buzzwords instead of measurable outcomes drive expectations, focus shifts to the wrong problems. AI is a powerful tool, but progress requires leadership that stays grounded, focuses on system-wide improvement, and prioritizes accountability over appearances.

A team member recently shared Writing Code Was Never the Bottleneck by Ordep. It cut through the noise. Speeding up code writing doesn’t solve the deeper issues in software delivery. That article echoed what I’ve written and experienced myself: AI helps, but not always where many think it does, at least not currently.

This post builds on my earlier post, Leading Through the AI Hype in R&D. That post challenged hype-driven expectations. This one continues the conversation by focusing on responsibility, measurement, and real system outcomes.

Code Implementation Is Rarely the Bottleneck

Tools like Copilot, Claude Code, Cursor, and Devin, among others, can help developers write code faster. But that’s not where most time is lost.

Delays come from vague requirements, missing context, architecture problems, slow reviews, and late feedback. Speeding up code generation in that environment doesn’t accelerate delivery. It accelerates dysfunction.

I Use AI in My Work

I’ve used agentic AI and tools to implement code, write services, and improve documentation. It’s productive. But it takes consistent reviews. I’ve paused, edited, and rewritten plenty of AI-generated output.

That’s why I support adoption. I created a tutorial to help engineers in my division learn to use AI effectively. It saves time. It adds value. But it’s not automatic. You still need structure, process, and alignment.

Engineers Must Own Impact, Not Just Output

Using AI doesn’t remove responsibility. Engineers are still accountable for what their code does once it runs.

They must monitor quality, performance, cost, and user impact. AI can generate a function. But if that function causes a spike in memory usage or breaks under scale, someone has to own that.

I covered this in Responsible Engineering: Beyond the Code – Owning the Impact. AI makes output faster. That makes responsibility more critical, not less. Code volume isn’t the goal. Ownership is.

Code Is One Step in a Larger System

Software delivery spans more than development. It includes discovery, planning, testing, release, and support. AI helps one step. But problems often live elsewhere.

If your system is broken before and after the code is written, AI won’t help. You need to fix flow, clarify ownership, and reduce friction across the whole value stream.

Small Teams Increase Risk Without System Support

Some leaders believe AI allows smaller teams to do more. That’s only true if the system around them improves too.

Smaller teams carry more scope. Cognitive load increases. Knowledge becomes harder to spread. Burnout rises.

Support pressure also grows. The same few experts get pulled into production issues. AI doesn’t take the call. It doesn’t debug or triage. That load falls on people already stretched thin.

When someone leaves, the risk is bigger. The team becomes fragile. Response times are slow. Delivery slips.

The Hard Part Is Not Writing the Code

One of my engineers said it well. Writing code is the easy part. The hard part is designing systems, maintaining quality, onboarding new people, and supporting the product in production.

AI helps with speed. It doesn’t build understanding.

AI Is a Tool. Not a Strategy

I support using AI. I’ve adopted it in my work and encourage others to do the same. But AI is a tool. It’s not a replacement for thinking.

Use it to reduce toil. Use it to improve iteration speed. But don’t treat it as a strategy. Don’t expect it to replace engineering judgment or improve systems on its own.

Some leaders see AI as a path to reduce headcount. That’s short-sighted. AI can increase team capacity. It can help deliver more features, faster. That can drive growth, expand market share, and increase revenue. The opportunity is to create more value, not simply lower cost.

The Metrics You Show Matter

Senior leaders face pressure to show results. Investors want proof that AI investments deliver value. That’s fair.

The mistake is reaching for the wrong metrics. Commit volume, pull requests, and code completions are easy to inflate with AI. They don’t reflect real outcomes.

This is where hype causes harm. Leaders start chasing numbers that match the story instead of measuring what matters. That weakens trust and obscures the impact.

If AI is helping, you’ll see a better flow. Fewer delays. Faster recovery. More predictable outcomes. If you’re not measuring those things, you’re missing the point.

AI Is No Longer Optional

AI adoption in software development is no longer a differentiator. It’s the new baseline.

Teams that resist it will fall behind. No investor would approve a team using hammers when nail guns are available. The expectation is clear. Adopt modern tools. Deliver better outcomes. Own the results.

What to Focus On

If you lead AI adoption, focus on the system, not the noise.

  • Improve how work moves across teams
  • Reduce delays between steps
  • Align teams on purpose and context
  • Use AI to support engineers, not replace them
  • Measure success with delivery metrics, not volume metrics
  • Expect engineers to own what they ship, with or without AI

You don’t need more code. You need better outcomes. AI can help, but only if the system is healthy and the people are accountable.

The hype will keep evolving. So will the tools. But your responsibility is clear. Focus on what’s real, what’s working, and what delivers value today.

Poking Holes

I invite your perspective on my posts. What are your thoughts?

Let’s talk: phil.clark@rethinkyourunderstanding.com


References

  1. Clark, Phil. Leading Through the AI Hype in R&D. Rethink Your Understanding. July 2025. Available at: https://rethinkyourunderstanding.com/2025/07/leading-through-the-ai-hype-in-rd
  2. Ordep. Writing Code Was Never the Bottleneck. Available at: https://ordep.dev/posts/writing-code-was-never-the-bottleneck
  3. Clark, Phil. Responsible Engineering: Beyond the Code – Owning the Impact. Rethink Your Understanding. March 2025. Available at: https://rethinkyourunderstanding.com/2025/03/responsible-engineering-beyond-the-code-owning-the-impact

Filed Under: Agile, AI, DevOps, Engineering, Leadership, Metrics, Product Delivery, Software Engineering


Copyright © 2026 · RYU Advisory & Media, LLC. All rights reserved.
