I've sat through dozens of performance reviews. Given them. Received them. Watched talented engineers resign because of them. And not once did I walk away thinking: this process made my team better.

[Image: A dusty filing cabinet overflowing with annual appraisal forms]

The US Army invented the annual performance review during World War II. Frederick Taylor's scientific management principles gave it shape. IBM and GE picked it up in the 1950s. It made sense when work was predictable, individual, and measurable in units. Widgets per hour. Lines typed. Boxes checked.

Software engineering is none of those things. And yet here we are, in 2026, still running the same ritual.

Somewhere between World War II and the present day, performance management became an elaborate compliance exercise rather than a tool for developing people. HR departments built systems. Managers were trained to use them. Employees learned to game them. Everyone agreed the process was broken. Nobody stopped doing it.

Until some of them did.

What the Data Says

Here are the numbers:

  • 95% of HR leaders report being dissatisfied with traditional performance appraisals.
  • 77% of those same leaders agree conventional reviews don't accurately capture employee performance.
  • 85% of employees say they'd consider leaving after receiving an unfair assessment.

This isn't a fringe position. This is near-universal consensus from the people running the process. We're administering a ritual that 95% of HR leaders don't believe in, that 77% admit doesn't measure the right things, and that sends 85% of employees mentally toward LinkedIn.

This isn't a broken process. It's a ritual we inherited and never questioned.

Ask the obvious question: if 95% of HR leaders are unhappy with performance appraisals, and 77% admit those appraisals don't even measure the right things, why do 71% of companies still conduct annual reviews? The answer isn't evidence. The answer is inertia. It's what we've always done. It's what the HR software supports. It's what the compensation cycle assumes. Nobody wants to be the person who breaks the ritual without something to replace it.

Something exists to replace it. We'll get there.

Why Software Teams Suffer Most

Annual reviews are bad for most knowledge workers. For software engineers, they're damaging in specific ways.

Memory decay kills accuracy. Ask your manager to recall a specific architectural decision you made nine months ago. They won't. The annual review doesn't assess your year. It assesses your most recent 6-8 weeks, dressed up as a 12-month verdict. Engineers who spent the first half of the year doing hard, foundational work get judged on whatever they shipped in Q4. The invisible work, the refactoring, the system stability improvements, the mentoring of junior developers... none of it registers.

Individual metrics don't fit team work. Software is collaborative. Code reviews, pair programming, unblocking other teams... none of this shows up cleanly in "tickets closed" or "commits pushed." When you reward those numbers, engineers game them. They stop taking on complex, uncertain problems. They avoid experimental approaches likely to fail. Innovation dries up, systematically.

Annual reviews undermine agile. If you run sprints and retrospectives, you've already built feedback loops every two weeks. Then you ask your engineers to wait 12 months for a meaningful performance conversation. The incoherence is staggering. Agile delivery, waterfall HR. Pick one.

Then there's the calibration problem. In large tech organisations, managers sit in calibration meetings comparing their team's ratings against other teams. Engineers get ranked against engineers in entirely different contexts, doing entirely different work. The engineer maintaining critical legacy infrastructure gets marked down against the engineer who shipped a shiny new feature. The process rewards visible work. It penalises essential work.

The Psychological Cost of the Annual Surprise

Here's something nobody talks about: review season itself damages performance. For 6-8 weeks before and after, your team is distracted. Anxious. Playing politics. Writing self-assessments instead of shipping code. Managers are calibrating ratings instead of developing their people.

Then the review happens. The engineer who worked tirelessly on infrastructure for 8 months, solving problems nobody noticed, walks out of the meeting feeling undervalued because Q4 was quieter. Their manager, scrambling to remember specifics, gives generic feedback. The engineer updates their CV the same evening.

According to SelectSoftwareReviews, 62% of millennials report being blindsided by their evaluations. Not surprised. Blindsided. The feedback was so disconnected from their daily experience of work that it felt like it came from a different person about a different job.

Annual reviews don't just fail to improve performance. They destroy trust.

[Image: A modern software team in a standup, collaborating around a sprint board]

The Companies Already Moving On

Adobe scrapped annual reviews in 2012, replacing them with regular "Check-in" conversations between managers and employees. The outcomes were measurable: unwanted attrition dropped by nearly a third, and Adobe saved an estimated 80,000 manager hours per year previously spent on the review cycle. Microsoft abandoned stack ranking in 2013 and eliminated ratings altogether, concluding the system created internal competition instead of collaboration.

The Gap moved to monthly one-on-ones and saw a 40% increase in employee engagement within 18 months.

These aren't small startups running people experiments. These are large, serious organisations that looked at the evidence and made the obvious call: the annual review costs more than it delivers.

Trust Is the Better Metric

[Image: Trust outweighs tick-boxes on a scale]

Kelly Swingler asks a sharp question: who's brave enough to measure people on trust, not tick-boxes?

It sounds soft. It isn't.

When employees trust their managers, they're 5 times more likely to be engaged. Organisations with continuous feedback cultures outperform peers by 24%. Teams receiving weekly feedback show 14.9% lower turnover.

Trust-based performance management doesn't mean no accountability. It means redesigning the rhythm of feedback entirely.

  • Regular one-on-ones, weekly or fortnightly, not quarterly
  • Feedback delivered close to the moment it's relevant, not months later
  • Expectations set clearly at the start of a project, not evaluated a year after
  • Conversations about growth and direction, not judgement and ratings
  • Honest dialogue about what's working and what isn't, while there's still time to act

For software teams specifically, this means using sprint retrospectives as feedback moments, not purely process reviews. It means recognising the engineer who unblocked three colleagues during a rough sprint, even if their own ticket count was low. It means asking "what did you need from me this week and didn't get?" instead of saving everything for December.

My 360-degree feedback tool at Step It Up HR is built on this premise: feedback needs to be specific, meaningful, and actionable. Not a year-end shock. Not a number on a form. The moment it becomes a ritual obligation, it stops working.

The Question Worth Asking

If your team dreads review season...

If your managers spend weeks writing performance notes for conversations going nowhere...

If your best engineers get rated on things unrelated to what they contributed...

...the annual review isn't broken. It's working exactly as designed. The design is the problem.

The question isn't whether performance appraisals work. The data settled the argument years ago. The question is whether you're brave enough to replace them with something real.

Start with one conversation. Weekly. No form. No rating scale. No year-end surprise. See what changes.

Then tell me the annual review was worth keeping.