Every Layer of Review Makes You 10x Slower — And What to Do About It | UData Blog
A new analysis shows that approval-heavy engineering cultures don't produce safer software — they just produce slower teams. Here's how leading companies are cutting review overhead without cutting quality.
A post this week made a simple, uncomfortable argument: every layer of review you add to a software delivery process roughly halves the speed of that process. Stack three or four layers — code review, design review, security review, compliance signoff — and the halvings compound to roughly a 10× speed differential between teams doing the same underlying work. The post sparked one of the more substantive HN threads in recent weeks, because most engineers have felt this, but few organizations have done anything about it.
The Review Trap
Review processes are added for good reasons. A bug ships to production; someone proposes a code review gate. A security incident occurs; a security review step is inserted. A compliance audit flags an undocumented change; an approval workflow is bolted on. Each addition is locally rational. The cumulative effect is a delivery process in which a three-hour code change can take three weeks to ship.
What makes this particularly insidious is that review overhead compounds invisibly. Teams measure how long code takes to write. They rarely measure how long code waits between stages. A 2025 DORA research report found that among low-performing engineering organizations, code spent an average of 68% of its lead time in wait states — queued for review, awaiting approval, blocked on a dependency — and only 32% being actively worked on. The bottleneck wasn't engineering capacity. It was process friction.
High-performing organizations in the same study showed the inverse: 70%+ of lead time was active work, with review cycles measured in hours rather than days. The difference wasn't the quality of engineers. It was the architecture of their review processes.
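The wait-state share described above is straightforward to compute once you log when a change is actively worked on versus queued. A minimal sketch, assuming a hypothetical per-change event log (the timestamps and state names are illustrative, not from any specific tool):

```python
from datetime import datetime, timedelta

# Hypothetical event log for one change: (timestamp, state) pairs, where
# "active" means being worked on and "waiting" means queued for review,
# awaiting approval, or blocked. The final entry marks the merge.
events = [
    (datetime(2026, 1, 5, 9, 0),  "active"),   # coding starts
    (datetime(2026, 1, 5, 12, 0), "waiting"),  # PR opened, queued for review
    (datetime(2026, 1, 8, 10, 0), "active"),   # review feedback, rework
    (datetime(2026, 1, 8, 11, 0), "waiting"),  # re-queued for approval
    (datetime(2026, 1, 9, 16, 0), "done"),     # merged
]

def wait_share(events):
    """Fraction of total lead time this change spent in wait states."""
    totals = {"active": timedelta(), "waiting": timedelta()}
    for (start, state), (end, _) in zip(events, events[1:]):
        totals[state] += end - start
    lead_time = totals["active"] + totals["waiting"]
    return totals["waiting"] / lead_time

print(f"wait share: {wait_share(events):.0%}")  # prints: wait share: 96%
```

Even this toy log shows the typical shape: four hours of actual work, four days of queue time. Aggregating the same ratio across all merged changes gives the lead-time breakdown the DORA study reports.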
Why More Reviews Don't Mean Better Software
The intuition behind approval layers is that more eyes catch more problems. That is true only up to a point, and most teams are well past it: research on code review effectiveness consistently shows diminishing returns beyond one thorough reviewer. A 2024 Microsoft Research study found that PRs with more than two reviewers showed no statistically significant improvement in defect rates compared to PRs with a single qualified reviewer — but showed a 40% increase in time-to-merge.
The quality argument for additional review layers holds in specific, high-stakes contexts: safety-critical systems, regulated financial transactions, irreversible infrastructure changes. In those cases, slower is the correct tradeoff. The problem is that most review culture doesn't discriminate — the same approval workflow is applied to a configuration change and a database schema migration, to a copy edit and a core authentication refactor.
When every change is treated as high-stakes, the cost of that treatment accumulates across thousands of low-stakes changes per year. The safety benefit on the rare high-risk change is real. The velocity cost on the majority of routine changes is also real, and it compounds.
What Fast Teams Actually Do Differently
The engineering organizations shipping fastest in 2026 haven't eliminated review — they've made it asymmetric. Different change types get different review weights. The underlying principle: review intensity should be proportional to the reversibility and blast radius of the change.
Automated checks for the routine majority: Linting, formatting, test coverage, dependency audits, and security scanning run automatically on every commit. These don't require human review time because they're consistent and fast. Any change that passes automated checks and touches only isolated, well-tested code can merge with a single reviewer or, in some contexts, none.
Single qualified reviewer for most changes: Feature changes, bug fixes, and routine refactors that touch non-critical paths get one reviewer — someone with context, not a mandatory queue of approvers. Review is asynchronous and expected to happen within hours, not days.
Deeper review for high-blast-radius changes: Schema migrations, security-sensitive code paths, shared infrastructure, and public API changes get explicit, synchronous review with the right experts in the room. These are the 5–10% of changes worth the overhead. Treating them differently than the other 90% is what makes fast teams fast.
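The proportionality rule behind these three tiers can be made explicit as a routing function. A minimal sketch, assuming change attributes a CI pipeline could derive; the field names and tier labels are illustrative, not from any specific tool:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """Attributes a pipeline might derive per change (names are illustrative)."""
    passes_automated_checks: bool
    touches_critical_path: bool   # schema, auth, shared infra, public API
    reversible_in_minutes: bool   # feature-flagged or trivially rolled back

def review_tier(change: Change) -> str:
    """Route a change to a review weight proportional to its blast radius."""
    if change.touches_critical_path:
        # The ~5-10% of changes worth synchronous, expert-in-the-room review.
        if change.touches_critical_path:
            return "synchronous expert review"
    if not change.passes_automated_checks:
        # Human review time is wasted on what machines catch for free.
        return "single reviewer after checks pass"
    if change.reversible_in_minutes:
        # Routine, isolated, reversible: the cheapest possible path.
        return "auto-merge or single reviewer"
    return "single qualified reviewer"

print(review_tier(Change(True, False, True)))  # prints: auto-merge or single reviewer
```

The point of writing the routing down as code, rather than leaving it as tribal knowledge, is that it can run in CI and be audited and changed like any other part of the system.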
Blameless post-mortems instead of pre-mortems for everything: Rather than adding approval gates every time something goes wrong, high-performing teams invest in reversibility. Feature flags, trunk-based development, fast rollback capabilities, and incremental deploys reduce the blast radius of any given change. When you can roll back in two minutes, the case for a 48-hour approval process weakens significantly.
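The reversibility argument is concrete: when the risky path sits behind a flag, rollback is a config flip rather than a redeploy. A minimal in-process sketch; real systems (LaunchDarkly, Unleash, an internal config service) externalize the flag store, and the flag name and handlers here are hypothetical:

```python
# In-memory flag store; a production system would read this from a config
# service so flipping a flag takes effect without a deploy.
FLAGS = {"new_checkout_flow": True}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def new_checkout(cart):
    return f"new:{len(cart)}"      # the risky change, behind the flag

def legacy_checkout(cart):
    return f"legacy:{len(cart)}"   # known-good path, kept as the rollback

def checkout(cart):
    # Rollback = set FLAGS["new_checkout_flow"] = False; no redeploy needed.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

With this structure in place, the change merges behind a disabled flag, ships dark, and is enabled incrementally, which is exactly what weakens the case for a 48-hour approval gate.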
The Organizational Dimension
Review bottlenecks are rarely purely technical. They're often cultural and structural. Approval requirements that started as reasonable safeguards become entrenched because no one owns the process end-to-end and no one has the authority to remove a step once it's in place. Engineers adapt by batching changes to amortize review overhead — which makes individual changes larger, which creates more review burden, which slows things further. The cycle reinforces itself.
Breaking out of this cycle requires someone to own the delivery process as explicitly as they own the product. That means measuring lead time, wait time, and review cycle time as first-class metrics — not just story points and sprint velocity. It means auditing review requirements annually and removing ones that no longer serve a clear purpose. And it means building the culture where an engineer can flag a broken process without it being interpreted as an attempt to dodge accountability.
Teams that do this sustainably tend to share a common characteristic: their senior engineers are invested in process quality, not just code quality. The two are connected. Slow delivery is a quality problem.
What This Means for Teams Using External Engineering Talent
Review overhead has a particular cost when working with outstaffed or contractor engineers. External engineers who submit work into a slow internal review queue are paid while they wait. If a developer submits a PR on Monday and it's reviewed on Thursday, you've paid for three days of idle time. Multiply that across a team and it erodes the cost efficiency that makes flexible staffing attractive in the first place.
The teams getting the best results from outstaffing are the ones that have already rationalized their review process before bringing in external engineers. Clear acceptance criteria, automated checks, fast async review cycles, and designated internal reviewers who are accountable for turnaround time — these aren't nice-to-haves. They're what allows external talent to deliver at the speed you're paying for.
Conversely, when an outstaffed team brings senior engineers with strong judgment, the review cycle often shortens naturally — because less rework is required. Experienced engineers front-load the thinking, write cleaner diffs, and flag risks before they reach review. The review becomes a confirmation rather than a discovery process. That's the leverage point: high-quality contributors reduce review burden even as they increase output.
How UData Helps
UData provides senior engineering talent for teams that need to move faster — and senior engineers who slow down a delivery pipeline are the wrong kind of expensive. Our engineers are selected for the judgment that reduces rework and review friction, not just the output that creates it.
We work with companies that are:
- Scaling a product and need external engineers who integrate quickly into existing review workflows
- Auditing their delivery process and need senior engineers who can identify where overhead is accumulating
- Building automation pipelines — CI, testing, deployment — that reduce the manual review surface
- Running lean teams where every engineer needs to own their changes end-to-end, without hand-holding
Fast delivery and high quality aren't in tension. They're achieved by the same discipline: clear scope, automated coverage of the routine, and human judgment applied to what actually requires it. We build teams that operate that way.
Conclusion
Adding review layers is one of the most common and least examined ways engineering organizations slow themselves down. The solution isn't to eliminate review — it's to make review proportional. Automate the routine, trust qualified engineers with a single reviewer, and reserve heavyweight process for the changes where the blast radius justifies it. Teams that get this right ship faster, with comparable or better defect rates, and with engineers who are less frustrated and more effective. The data on this is consistent. The practice of applying it is what separates fast organizations from slow ones.