Median time from pull request opened to first review across most engineering teams is around 22 hours. That number is not a speed problem — it is a symptom of three upstream causes: oversized pull requests, knowledge silos, and team workflows that treat review as optional. Fix the causes and the 22 hours collapses to under two. Treat the symptom and it returns within a quarter.
Your team is not slow because your engineers are slow. They are slow because their work sits idle.
Median time from pull request opened to first review, across most teams: 22 hours.
That is not 22 hours of review effort. It is 22 hours of a pull request sitting there, doing nothing, while the author switches to another task, while the reviewer focuses on their own PRs, while context decays.
It is also not the full cost. The 22 hours is just the first wait.
What the Number Actually Measures
Time to first review is a cycle-time metric. It measures the elapsed time between a developer marking their work ready for review and the first comment from a reviewer. It does not include: the time waiting for a second review after changes are addressed, the time to merge, the time to deploy.
In teams with healthy review practices, this number is under two hours. In elite teams, often under thirty minutes.
When the median is 22 hours, the distribution is worse. A median of 22 hours means half of pull requests wait longer than that. The 90th percentile typically runs to three to five days. After a five-day review cycle, the author often no longer remembers exactly why they wrote the code being merged.
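The gap between the median and the tail is easy to check against your own data. A minimal sketch using Python's standard library, with hypothetical wait times (in hours) standing in for your team's real numbers:

```python
import statistics

# Hypothetical first-review wait times in hours, one per pull request.
waits = [1, 3, 6, 14, 20, 22, 22, 30, 45, 70, 96, 120]

median = statistics.median(waits)

# quantiles(n=10) returns the 9 cut points between deciles;
# the last one is the 90th percentile.
p90 = statistics.quantiles(waits, n=10)[-1]

print(f"median: {median:.0f}h, p90: {p90:.0f}h ({p90 / 24:.1f} days)")
# → median: 22h, p90: 113h (4.7 days)
```

A healthy median with a multi-day p90 still means a meaningful fraction of changes rot on the vine, which is why the percentiles are worth tracking alongside the median.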
The Costs You Do Not See
A 22-hour review delay is a line item that never appears on any dashboard. But it shows up everywhere else.
Context switching tax. When a developer opens a PR and waits 22 hours for feedback, they do not sit idle. They switch to another task. When feedback arrives, they must reload the context of the original change — the design decisions, the tradeoffs, the implementation choices. Research on context switching consistently shows this costs 20-40% of effective developer capacity. You are paying for a full-time engineer. You are getting 60-80% of one.
Work-in-progress inflation. Every blocked PR drives developers to start new work. New work creates new PRs. New PRs need review. The reviewer — likely the same senior engineer already overloaded — now has a queue of four PRs waiting, each stale, each requiring a context reload before useful feedback is possible. The symptom looks like "we need more senior engineers." The cause is that each PR takes five times longer to review than it should.
Rework multiplication. The longer a change sits without review, the more the rest of the codebase shifts under it. By the time review happens, the PR may conflict with other changes, depend on APIs that moved, or violate patterns established while it waited. Merge conflicts compound.
Quality erosion. Reviews done on large, stale PRs get superficial. A reviewer faced with 400 lines of code, half of which has been through multiple revisions, will skim rather than scrutinize. The review ritual still happens. The quality gate does not.
Why Reviews Take This Long
Three common causes, and they are rarely the ones teams blame.
Cause 1: PRs are too big.
Research from Google, SmartBear, and others consistently shows the same pattern: reviewer effectiveness drops sharply after 200-400 lines of changed code. A 50-line PR gets reviewed in 10 minutes. A 500-line PR takes 90 minutes, if a reviewer commits to it — and most reviewers procrastinate on PRs that look like 90 minutes of work.
Large PRs come from long-lived feature branches. Long-lived branches come from work that was not decomposed into small, independently shippable increments before development started.
Review latency is downstream of the work decomposition problem.
Cause 2: Knowledge silos.
When only two engineers understand the payments module, every PR touching payments queues behind those two. Their review queue grows while other engineers, who could be reviewing, sit idle. The constraint is not review capacity in general. It is review capacity for specific code areas, concentrated in too few people.
This is an architecture and documentation problem that presents as a review bottleneck.
Cause 3: Push-based work assignment.
When each engineer has their own assigned backlog, reviewing someone else's PR feels like a distraction from "my work." The incentive structure penalizes collaboration. Every review competes with the reviewer's own deadlines, and reviews lose that competition by default.
This is a team workflow problem that also presents as a review bottleneck.
The Quality Argument Is Backwards
Most teams justify long review cycles with "we are being thorough." The data does not support this.
Code review is effective at catching specific kinds of issues: design flaws, logic errors, missed edge cases, security patterns that static analysis cannot detect. These are caught in the first 15-30 minutes of review attention, on changes under 200 lines.
Code review is ineffective at catching other kinds of issues: syntax errors, style violations, formatting inconsistencies, naming convention drift. These should be caught by automation before review happens.
Teams that use review as a catch-all quality gate — expecting reviewers to find missing tests, check for style, enforce architectural standards — spend reviewer time on problems machines should solve. Meanwhile, the actual design and logic issues get less attention because reviewers are fatigued by the time they reach them.
The first fix for slow reviews is almost never "faster reviewers." It is "automate what automation should catch, so reviewers can focus on what humans do well."
What Good Looks Like
Elite teams complete the entire review cycle, from pull request opened to merged, in two to four hours, not twenty-two.
The patterns are consistent:
- PR size discipline. Changes under 200 lines. Under 100 is better. If a change cannot be split, the design should be discussed before coding, not after.
- Automated quality gates. Linting, formatting, unit tests, integration tests, security scans, and architecture fitness checks run on every commit. Human review starts from a clean base.
- Synchronous review where possible. Pair programming eliminates the review wait entirely — code is reviewed as it is written. Over-the-shoulder review and mob programming work similarly. Async review is the fallback, not the default.
- Reviewer SLAs as team agreements. "Pull requests are reviewed within two hours during working hours" is a codified team commitment, not an individual aspiration. Stale PRs (>24 hours) get escalated automatically.
- Knowledge distribution as a first-class concern. Any module with fewer than two qualified reviewers is flagged as a risk. Pairing on that module becomes the fastest way to fix it.
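The SLA pattern above can be enforced mechanically rather than by memory. A minimal sketch of the escalation check, with illustrative record fields (in practice the data would come from your forge's API, and the field names here are assumptions, not any particular API's):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)

# Illustrative open-PR records; field names are hypothetical.
open_prs = [
    {"number": 101,
     "opened_at": datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc),
     "has_review": False},
    {"number": 102,
     "opened_at": datetime(2024, 5, 2, 15, 0, tzinfo=timezone.utc),
     "has_review": True},
]

def stale_prs(prs, now):
    """Return PRs with no review that have waited past the SLA."""
    return [pr for pr in prs
            if not pr["has_review"] and now - pr["opened_at"] > STALE_AFTER]

now = datetime(2024, 5, 3, 9, 0, tzinfo=timezone.utc)
for pr in stale_prs(open_prs, now):
    hours_waiting = (now - pr["opened_at"]).total_seconds() / 3600
    print(f"escalate: PR #{pr['number']} unreviewed for {hours_waiting:.0f}h")
# → escalate: PR #101 unreviewed for 48h
```

Wired into a scheduled CI job or chat bot, a check like this turns the team agreement into an automatic nudge instead of a rule someone has to remember to police.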
What to Measure
If you are trying to fix slow reviews, three metrics tell you more than any velocity report.
- Median time from PR opened to first review. Target: under 2 hours during working hours.
- Median PR size. Target: under 200 lines. If it is over, the upstream problem is work decomposition, not review speed.
- PR age at merge. Target: under 24 hours. Older PRs mean longer branches, more conflicts, and stale context.
These three metrics, tracked weekly, expose the pattern. Most teams see the bottleneck shift as they fix it: first PRs get smaller, then review time drops, then cycle time collapses.
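All three metrics fall out of the same per-PR records. A sketch of the weekly rollup, assuming you have exported opened, first-review, and merge timestamps plus a line count per PR (the record shape is illustrative):

```python
import statistics
from datetime import datetime

# Illustrative per-PR records: ISO timestamps and changed-line counts.
prs = [
    {"opened": "2024-05-01T09:00", "first_review": "2024-05-01T10:30",
     "merged": "2024-05-01T16:00", "lines_changed": 120},
    {"opened": "2024-05-02T11:00", "first_review": "2024-05-03T09:00",
     "merged": "2024-05-03T14:00", "lines_changed": 480},
]

def hours(start, end):
    """Elapsed hours between two ISO-format timestamps."""
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 3600

time_to_first_review = statistics.median(
    hours(p["opened"], p["first_review"]) for p in prs)
pr_size = statistics.median(p["lines_changed"] for p in prs)
age_at_merge = statistics.median(hours(p["opened"], p["merged"]) for p in prs)

print(f"median time to first review: {time_to_first_review:.1f}h (target < 2h)")
print(f"median PR size: {pr_size:.0f} lines (target < 200)")
print(f"median PR age at merge: {age_at_merge:.1f}h (target < 24h)")
```

Note this simple version counts calendar hours; restricting the calculation to working hours, as the targets assume, takes a little more bookkeeping but follows the same structure.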
Three Monday morning actions:

- Pull review latency data from GitHub, GitLab, or Bitbucket for the last 30 days. Calculate the median. If it is above 4 hours, you have the problem — and you now have a baseline to improve against.
- For the next week, track PR size in your team's stand-ups. Any PR over 300 lines gets a single question: could this have been split? The answer is almost always yes.
- In the next retrospective, identify which modules concentrate review load. If any single engineer is on more than 50% of reviews for a module, that is your knowledge silo. Pair someone into it.
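The silo check in the third action can be scripted from review metadata. A sketch, assuming you can map each past review to a (module, reviewer) pair — that mapping and the 50% threshold are taken from the action above, not from any particular tool:

```python
from collections import Counter

# Illustrative (module, reviewer) pairs from a quarter of review history.
reviews = [
    ("payments", "alice"), ("payments", "alice"), ("payments", "alice"),
    ("payments", "bob"),
    ("search", "bob"), ("search", "carol"), ("search", "dana"),
]

SILO_THRESHOLD = 0.5  # one engineer doing more than 50% of a module's reviews

def find_silos(reviews, threshold=SILO_THRESHOLD):
    """Return (module, reviewer, share) for every over-concentrated module."""
    by_module = {}
    for module, reviewer in reviews:
        by_module.setdefault(module, Counter())[reviewer] += 1
    silos = []
    for module, counts in by_module.items():
        reviewer, n = counts.most_common(1)[0]
        share = n / sum(counts.values())
        if share > threshold:
            silos.append((module, reviewer, share))
    return silos

for module, reviewer, share in find_silos(reviews):
    print(f"knowledge silo: {reviewer} does {share:.0%} of reviews in {module}")
# → knowledge silo: alice does 75% of reviews in payments
```

Run quarterly, this surfaces the modules where pairing someone in will pay off fastest.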
The Bottom Line
The 22-hour first-review time is not a review speed problem. It is a downstream symptom of three upstream problems: oversized PRs, knowledge silos, and team workflows that deprioritize review. Teams that fix review latency without addressing these causes discover the latency returns within a quarter. Teams that fix the causes discover the review bottleneck disappears — along with the invisible tax it was charging on every feature their team ships.