Counterproductive Code Review & Its Cures

Code review is a powerful tool - but use it wisely!

Code review! This is among the most important responsibilities of a software engineer. It’s a good thing. We all want to maintain the quality of the codebases we inhabit together, while also sharing our knowledge and helping each other grow!

Unfortunately, not all feedback nets out positive or productive for the team, culture, or product — even when well-intentioned, and even when the suggested changes would be technically better for the codebase (at least from a narrow perspective that ignores opportunity costs). This is not to discourage code review: in almost every case, any code review is better than none.

Sometimes feedback can come across as overly judgemental, minor or nitpicky, or uninformed by overall priorities. Even when there is some value in this kind of feedback, there are often more helpful ways to express it or see it through.

We have all given subpar code review before, so nobody in particular is to blame here. Just like coding itself, code review is a skill — and getting better at it is part of every engineer’s and team’s journey, including my own.

Let’s get into it together!

How It Can Be Counterproductive

Even When Well-Intentioned

Code review almost always comes from a good place and is helpful. Much, if not most, of the feedback we receive is constructive and informative. Either way, the reviewer took time out of their day to help a peer with their work. It’s important to remember that — even when the feedback feels bad, and even when it’s ultimately counterproductive.

Unfortunately, there’s that pesky (and, to be fair, very selectively applied) idiom about the road paved by good intentions…

Here are some well-intentioned reasons why some counterproductive code review might occur — maybe:

  • We truly enjoy helping and educating others, and are willing to spend time on this — but we end up spending more time than it’s worth.
  • We don’t have enough time in the day to focus deeply on a given review.
  • We don’t have enough context or experience to know how much our feedback is going to matter.
  • We are being asked to review code that we don’t understand well enough to review more usefully — maybe the team has yet to sufficiently onboard us to the code area, or the organization has only recently shifted ownership to us.
  • We catch significant problems often enough, but unpredictably, so it always feels worth a shot to call things out.
  • Something we’re suggesting is just genuinely more important to us than to the change author or to others.
  • The change author just doesn’t quite yet understand why it’s important (at least to us, if not in general).
  • We would like the team to discuss a particular pattern, but we don’t have opportunities or the confidence to raise those to a larger group in a separate forum.
  • As reviewers, we are unaware of pressures or priorities that would make blocking or stalling the change a suboptimal decision.

Assumptions About Others

Sometimes we make negative assumptions about the change author. Or we phrase feedback in a way that leaves the change author unsure whether we think poorly of them.

There are many reasons why someone who is generally a good engineer might raise a change that gets a lot of feedback.

Perhaps the team has yet to sufficiently onboard the code author in the code area, or the organization has recently shifted the ownership to the change author.

Perhaps the change author or reviewer comes from a prior team where things were done differently for valid reasons.

Harsh or even just matter-of-fact feedback that doesn’t express empathy to these possibilities can often unnecessarily trigger feelings of inadequacy (e.g., imposter syndrome) in the change author. “Was I supposed to know this already somehow?”

Sometimes, the change author actually is aware of the issue already and has considered the trade-offs that the reviewer raises. If feedback covers too obvious a topic, the author may feel insulted by the assumptions being made about their level of knowledge, foresight, or care.

(Though, to be fair, the change author could have anticipated the feedback and prevented this by including code comments to explain their reasoning before asking for review.)

Unhealthy Debate

Have you ever seen code review conversations start a debate and turn sour? A debate creates winners and losers. People will rarely back down once they’ve staked their name on a stance. A lot of irrational energy will be expended defeating the countering position. Small differences of opinion can spiral into extended debates, ending unproductively and in bad feelings.

Velocity Impact

All code review slows velocity in the near term, and that’s intentional. Feedback and code quality are good, but wasted time and extensive rework are not.

Counterproductive feedback reduces velocity more than it buys back in quality. Or worse, it increases mean time to recovery, in the case of a production bug fix.

As a side effect, it can also increase stress and frustration — especially when there is significant pressure to roll out the change, because its general thrust adds far more value to the company than the suggested tweaks would. The reviewer is often unaware of this pressure or value ratio.

When we start a comment thread on a proposed code change, it opens an asynchronous back-and-forth that both parties must repeatedly check and follow up on. A lot of time goes into typing out bespoke responses, not knowing whether the change author has already replied or fixed the issues, until an apparent agreement or resolution is achieved. And the same kinds of conversations on the same topics seem to happen over and over again.

Opportunity Cost

This is similar to the velocity impact discussed above, but specifically: imagine all the other investments in code quality that could be made with the time spent on code review — this is part of what can make some code review counterintuitively counterproductive! Some of the Solutions at the end of this post are a much better investment.

How It Becomes More Counterproductive

Unfortunately, there are also a number of perhaps less well-intentioned dynamics that can arise in a team and negatively influence code reviews and team health.

Prestige & Competition

We’re humans. We may have to wrangle with a temptation to compete with our peers — even just to feed our own egos, or due to personal conflict, but especially if our career growth depends on being considered an expert in some areas and, importantly, more of an expert than others. To prove our expertise, we might be tempted to one-up and correct each other wherever possible, even on minor stylistic choices or programming mistakes.

It’s crucial for managers to encourage, expect, recognize, and reward positive “soft skill” behaviors like openness, communication, and mentorship (see Value Soft Skills below). If they do, then prestige chasing and competitive behaviors will not be rewarded, and expertise alone will not necessarily be rewarded either. But when managers don’t factor in soft skills as much as they should, or don’t have visibility into who expresses them and how, they may end up making tough calls about promotions and raises based largely on visible technical expertise — that is, on those who most often (or most loudly) have the “right” answers, and whose opinions and knowledge areas have defined the current architecture, in many cases by “winning” previous debates or intimidating others away from trying to influence it.

But the bottom line is this: code review is not a place for us to build our own resume! It’s there to help the change author and our codebase. A good codebase serves no purpose if the team that supports it collapses because its more skilled members alienate everyone else. We want to grow our careers in a place with a healthy project, and that requires a healthy team around it.

Quantification of Job Performance

In our industry, sometimes our peers or managers will attempt to measure our job performance — for example, when looking at code review participation, by pulling some statistics like “how many comments on average were left in each code review” or “how many code reviews were done in the last month.”

While there may be merit in the spirit behind these attempts — and they can even be helpful tools in practice if used carefully — they can also have the adverse side effect of introducing perverse incentives that skew behavior, including in code reviews. Engineers may consciously or unconsciously “gamify” these numbers, tuning what they do to what’s being measured rather than to actual value — i.e., maximizing code review quantity in a way that can be decoupled from, or even antithetical to, quality.

When pursuing quantity in reviews, it can be easiest to latch on to low hanging fruit — simple, common patterns that we’ve all seen before, that deviate from team convention or personal preference of the reviewer. The smaller and more frequent the pattern, the easier it is to call out, and to seemingly check off that box of code review responsibilities. Sometimes, we just have to say something, anything, even if we know it’s minor or nitpicky.
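To make the incentive problem concrete, here is a minimal sketch of the “comments per review” metric described above. The data shape is invented for illustration (it is not any real platform’s API), but it shows how the metric rewards volume rather than value:

```python
# Hypothetical review records -- the shape is an assumption for
# illustration, not a real code review platform's data model.
reviews = [
    {"reviewer": "alice", "comments": 2},   # two substantive comments
    {"reviewer": "alice", "comments": 0},   # approved with no comments
    {"reviewer": "bob",   "comments": 12},  # twelve nitpicks
]

def avg_comments_per_review(records, reviewer):
    """Average comment count across a reviewer's reviews."""
    counts = [r["comments"] for r in records if r["reviewer"] == reviewer]
    return sum(counts) / len(counts) if counts else 0.0

# The metric counts volume, not value: bob "outscores" alice here,
# even though alice's careful approval may have been the better review.
print(avg_comments_per_review(reviews, "alice"))  # 1.0
print(avg_comments_per_review(reviews, "bob"))    # 12.0
```

Nothing in this number distinguishes a dozen nitpicks from one comment that catches a production bug — which is exactly how the gamification creeps in.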


Solutions

While we don’t have all the answers yet as a community or industry, here are some strategies that can help improve our code reviews and our lives in general!

Value Soft Skills

Invest in soft skills. Encourage your team to as well. Recognize others when they show skill here & elsewhere. Be an advocate for a healthy engineering culture. Let your manager know you are investing in these skills and how much you are mentoring, teaching, learning, etc.

Tactful communication is a skill that can be difficult to learn and is not as common as we might hope. And even harder than crafting the content of our messages is knowing what is worth talking about in the first place.

There are gentle and forgiving ways to deliver feedback that are less likely to be perceived negatively.

  • Be courteous in your code review — it’s there to help the change author!
  • Acknowledge the merits of the existing code changes as-is — even if minor — and your suggestions will be better received.
  • Show humility — admit to the limits of your own knowledge, and recognize when you might be overly opinionated.
  • Which are more approachable, statements or questions? Have you considered the Socratic method? 😉
  • Indicate if feedback is minor or can optionally be done as a follow-up in the future — and offer to help file issues to track that work.

I strongly recommend reading “Software Engineering - The Soft Parts” by Addy Osmani.

Prioritize, Don’t Perfect

Engineers, including me, can tend toward the perfectionist side. Yet we know we have never been, and will never be, able to do everything perfectly.

We don’t expect perfection of ourselves in our own code — otherwise, we would never deliver. We should not expect this of others.

We should not comment on every slightly imperfect thing we see.

We can avoid leaving too many minor comments on a given code change, especially if there are already a few major patterns on which we should focus (and on which to let the change author focus).


Pairing

Try pairing on code review with the change author! Get together in person, or on a video call with screen share, to work through some code — cameras optional but encouraged! It feels more personal than comment threads, and it’s easier to clarify what we mean in real time.

The change author may feel more comfortable receiving feedback or asking questions when it’s not part of the permanent public record on a pull request where the whole team might see.

Or even better — try to pair earlier in the code-writing process, so that code is delivered in a better state upfront and heavy late-stage review is not necessary.


Documentation

General code conventions intended for everyone to follow everywhere should not exist only as one-off code review comments on specific change requests, and otherwise only in the heads of specific reviewers. If the ideas are important enough, then we should write them down, just as we write down code itself.

This will be much more efficient than each engineer repeating the same advice over and over in each code review.

(Note: GitHub does have a “saved replies” feature, but at time of writing, it’s managed per-user, rather than shared across the team.)

Not only will this content be more consistent and higher quality, but it will also reduce time spent by code reviewers, freeing them up for more important work.

Plus, as a change author, seeing someone drop a link to existing documentation, rather than a custom write-up, immediately feels less personal and less critical — more informative and constructive. It’s comforting to see that the issue is apparently common enough among my peers to have been documented before — meaning it’s clearly not me alone who has made the mistake.

Automation & Tooling

Even better than documentation in many ways: another route to saving time and reducing drama, while also enforcing conventions more reliably, is to invest in automation, continuous integration, code linting, and other code quality tools.

A message from a bot or a continuous integration script doesn’t feel so personal. Just make sure you configure or build the tool to have very clear and actionable messaging, since it’s not a two-way conversation like code review is. Your tools’ messaging can & should even link back to your documentation!
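As a sketch of what that can look like, here is a hypothetical CI check. The convention it enforces (TODOs must reference an issue), the “TODO(#123)” format, and the docs URL are all invented placeholders — substitute your own team’s rules and links. The point is the message: it states the file and line, what to do instead, and where the convention is documented.

```python
# A small, hypothetical CI check: flag TODO comments that don't
# reference a tracking issue. Convention and URL are placeholders.
import re

DOCS_URL = "https://example.com/docs/conventions#todos"  # hypothetical
# Match "TODO" not immediately followed by "(#<number>)".
TODO_RE = re.compile(r"\bTODO\b(?!\(#\d+\))")

def check_file(path: str) -> list[str]:
    """Return one clear, actionable message per violation."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            if TODO_RE.search(line):
                problems.append(
                    f"{path}:{lineno}: TODO without an issue reference. "
                    f"Use 'TODO(#123)' so the work is tracked. See {DOCS_URL}"
                )
    return problems

def main(paths: list[str]) -> int:
    problems = [p for path in paths for p in check_file(path)]
    for p in problems:
        print(p)
    return 1 if problems else 0  # nonzero exit fails the CI job
```

In CI, this might be wired up as `sys.exit(main(sys.argv[1:]))` over the changed files. The reviewer never has to type the explanation, and the author gets the rule, the fix, and the rationale in one impersonal message.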

I’m planning a post on linting in the future, so look forward to that! 🎉

Thanks for reading! 🙇