
In software delivery, especially in Agile environments, there is a concept known as the definition of done. At its core, it is a shared standard for what must be true before a piece of work is considered complete and ready to release. The code is written. The testing is complete. The acceptance criteria are met. The work is stable enough to move into production.
That definition mattered, and still does.
For years, across my own experience and the many organizations I have worked with, observed, or followed, Agile delivery teams have treated the definition of done as a checkpoint tied to delivery and release.
The code is complete. The tests pass. The acceptance criteria are met. The feature is in production. Done.
That definition improved engineering discipline. It pushed teams toward better testing, stronger automation, more reliable release practices, and greater operational consistency. Those things still matter. If work cannot move safely to production, value will never move consistently either.
But production was never the true finish line. It was the handoff.
The real definition of done starts earlier and ends later. It starts with an anticipated outcome and completes only when the organization closes the loop and understands what was actually realized.
That shift changes the meaning of delivery. It changes what leaders ask for. It changes what teams learn from their work. And it changes whether technology is measured as completed activity or as a system that contributes to meaningful business and customer outcomes.
The old definition of done was necessary, but incomplete
In many environments, done still means released. A team picks up the work, builds it, tests it, ships it, and moves on. The dashboard updates. The milestone turns green. Another item leaves the board.
That approach measures motion. It can improve discipline. It can even create the appearance of progress. But it leaves a critical question unanswered: what happened because of the work?
The more important question is whether customers adopted it, whether it reduced friction, improved conversion, lowered support demand, reduced risk, improved retention, or created the behavior change that justified the investment in the first place. Too often, those realization questions never become part of the operating rhythm.
I have seen this firsthand. I have seen teams work hard, ship something meaningful, and move on without ever being told what happened next.
I have also seen leaders make decisions with strong delivery data and weak outcome visibility. Over time, that gap shapes priorities, culture, and the way people think about the value of the work itself.
Flow and Realization answer different questions
This is why I continue to think about the problem through the lens of Flow and Realization.
Flow helps leaders see how work moves through the system. It shows where work waits, where it stalls, where dependencies create friction, and how efficiently the organization turns effort into delivery.
That visibility matters. Without it, waste hides in the system. Delays begin to feel normal. Bottlenecks remain unmanaged. Leadership decisions about funding, structure, priorities, and capacity create drag without anyone seeing the full effect.
But even strong flow does not answer every important question. An organization can improve speed, reduce friction, and deliver more predictably while still investing in work that does not meaningfully change anything.
That is where Realization matters.
Realization asks a different question: what was the actual outcome of the effort? The issue is larger than whether we delivered. The deeper question is whether the effort mattered.
The two belong together. Flow helps us understand how the system performs. Realization helps us understand whether the effort produced the intended impact.
Leaders need both perspectives to understand how value moves and whether it was actually created.
The real definition of done begins before the work begins
A better definition of done does not start at release. It starts when the work is framed.
Every meaningful effort should carry an anticipated outcome.
That does not require artificial precision, and it does not mean pretending every feature can be forecast perfectly. It means the work should enter the system with a clear reason for existing beyond the act of shipping it.
In stronger operating models, anticipated outcomes connect to a larger goal, whether that is a team objective, a value stream target, a divisional priority, or a broader organizational goal. That alignment gives the work clearer context and makes it easier to evaluate beyond the ticket itself.
This applies to more than customer-facing features. Platform work, technology investments, and technical debt reduction should also carry anticipated outcomes that leaders can connect back to a broader goal.
What are we expecting this effort to change?
That anticipated outcome might take many forms: improved adoption, reduced customer friction, fewer support cases, faster onboarding, improved conversion, reduced operational toil, lower compliance risk, faster recovery, stronger retention, or lower cost to serve. Not every effort will tie directly to revenue. Some will be operational, quality-related, or risk-based. That is fine.
What matters is that the work begins with intent.
That is a much stronger starting point than simply saying, “Someone asked for it.”
The hardest part comes after release
This is where the gap usually opens. Closing the loop is harder than shipping the work.
The signal often lags delivery. Sometimes it appears in three weeks. Sometimes three months. Sometimes longer. The data may live in different systems. The evidence may be split across analytics, operations, support, customer conversations, or business reporting. Sometimes the result is obvious. Often it is not.
All of that is real. None of it removes the responsibility.
In fact, the delay is exactly why so many organizations stop short. Once the work is in production, the energy shifts to the next commitment. The roadmap advances. Capacity is consumed. The follow-through becomes optional.
That is how teams end up shipping work without learning from it.
If nobody returns to examine what happened, the organization may improve delivery while staying weak at learning from its investments. It may get faster at producing output without getting better at deciding what is worth doing.
That is why I increasingly believe closing the loop belongs in the real definition of done. It may be difficult. It is still necessary.
Production is the start of evidence, not the end of accountability
A mature organization does not treat release as the end of the story. It treats release as the point when the work becomes eligible to prove itself.
That does not mean teams wait idly for months before moving forward. It means the organization creates an intentional mechanism to come back, document the actual outcome, examine the signal, compare anticipated outcomes with actual outcomes, and learn from the difference.
That is part of responsible product leadership. It is part of responsible investment. And it is part of what separates an organization that merely delivers from one that gets smarter over time.
Too many companies have strong delivery conversations and weak learning conversations. They review throughput, cycle time, releases, and roadmap progress, yet they do not consistently review realized impact.
That is not a small omission. Over time, it compounds into repeated investment without evidence, output without understanding, and a widening disconnect between effort and value.
Teams need the loop closed too
This issue reaches far beyond the executive level. It affects teams, shapes culture, and influences whether people stay connected to their work’s purpose.
Teams are energized by purpose. Most people doing this work want to know they are building the right things. They want to know their effort had an effect. They want to know their work helped a customer, reduced friction, improved an experience, lowered risk, or moved something meaningful in the business.
When organizations fail to close the loop, teams lose that connection.
They may ship often. They may meet every delivery expectation. They may perform with discipline and professionalism. But if nobody comes back to share what actually happened after release, teams are left without one of the signals that matter most: did our work matter?
When that question goes unanswered long enough, teams can begin to feel like feature factories. Work becomes transactional. Delivery becomes detached from meaning. And even high-performing teams can lose some of the energy that comes from knowing their effort contributed to something real.
That is why Realization should not remain trapped at the leadership level. Even if leaders get better at evaluating actual outcomes, those outcomes should still be communicated back to the teams that did the work. The signal needs to travel both ways. Leaders need it to improve investment decisions. Teams need it to stay connected to purpose and sharpen their judgment.
And that is true whether the outcome was positive, mixed, or disappointing. If the effort worked, the team should know. If it underperformed, the team should know that too. The goal is learning.
Closing the loop improves investment discipline and reinforces culture at the same time. It helps teams see that their work contributes to real outcomes, not just finished deliverables.
A stronger operating model changes the conversation
When leaders bring Flow and Realization together, the conversation becomes more useful. It moves beyond release speed alone and creates space to ask better questions: what the effort was expected to change, how the work moved through the system, where friction slowed it down, what happened after release, and what those results should mean for future investment decisions.
That creates a stronger management system, one that connects delivery discipline, product thinking, and business accountability while giving teams something more meaningful than activity-based success criteria. It gives them context for why the work matters.
The real definition of done
The old definition of done was valuable, but incomplete.
A broader view of done still includes production readiness. Quality, testing, security, operational readiness, and reliable release practices remain foundational. Those things tell us whether work was delivered responsibly. They do not tell us whether the investment produced anything meaningful.
The real definition of done is broader.
A meaningful effort begins with an anticipated outcome, moves through the value stream with visible flow, reaches production with quality and operational integrity, and is later revisited to understand what was actually realized.
That is done.
Not because every answer appears immediately.
Not because every result is easy to measure.
Not because every outcome is clean.
But because the organization commits to closing the loop.
That is what turns delivery into learning.
That is what turns output into evidence.
That is what turns technology work into business intelligence.
If teams define done only as “released,” they risk confusing completion with value.
If leaders define done as “released, learned, and understood,” they create the conditions for better decisions, better investment judgment, and stronger team connection to purpose.
That is the real definition of done.
Poking holes
I invite your perspective. What do you think?
Let’s talk: phil.clark@rethinkyourunderstanding.com
Author’s note: This article reflects one of the core ideas I explore more deeply in my upcoming book, Profitable Engineering, which examines how Flow, Realization, and leadership decisions shape whether technology becomes a strategic partner to the business.