Tech loves this story about itself. The best engineers rise. The smartest ideas win. Your code speaks for you. Pull requests are your CV.

I believed this for years. The numbers changed my mind.

The research doesn't just challenge the meritocracy story in tech. It shows the story itself makes things worse.

[Image: Performance review comparison showing different bonus figures for two equal performers]

The Paradox You Need to Know About

In 2010, researchers Emilio Castilla (MIT Sloan) and Stephen Benard published findings in the Administrative Science Quarterly worth sitting with.

They ran controlled experiments with 445 MBA students reviewing employee performance profiles. When managers were told their organization valued meritocracy, male employees received average bonuses of $418.80. Equally performing female employees received $372.40.

A $46.40 gap. Identical performance. Different outcomes.

When the meritocracy label was removed, the gap reversed. Women averaged more.

The researchers named this the "paradox of meritocracy." When people believe the system is fair, they stop scrutinizing their own decisions. They give themselves permission to follow gut instinct, because of course they work in a meritocracy.

Harvard's Digital Data Design Institute confirmed the pattern: consulting firms achieved gender balance at entry level over 20 years ago. Less than 20% of managing directors and partners are women today. Two decades of full pipeline. Same leadership profile at the top.

What the Numbers Say About Tech

Pew Research surveyed over 2,300 STEM workers in 2017. The findings are uncomfortable.

74% of women in computer jobs reported experiencing discrimination because of their gender. Among men in the same roles: 16%.

In 1990, women held 32% of computer occupation roles. By 2022, the number had dropped to 24%, per US Department of Labor data.

50% of women leave tech careers by age 35. In other industries, the figure is 20%.

[Image: Pyramid illustration showing the diversity gap between the base and leadership levels]

For every 100 men promoted to manager, 81 women advance. For Black women, the number drops to 54.

McKinsey estimates full gender parity in tech leadership is 50 years away at current trajectory.

These aren't the numbers of a meritocracy. They're the numbers of a system where the meritocracy label does heavy lifting to prevent anyone from looking at outcomes.

What Actually Rises to the Top

Dr. Tomas Chamorro-Premuzic, professor of business psychology at Columbia University and University College London, spent years researching why leaders rise. His conclusion: we select for confidence, not competence.

We mistake charisma for capability. We reward people who perform certainty and penalize those who express doubt or seek input. The traits that make someone look like a leader (loudness, self-promotion, projecting unearned authority) correlate weakly with the traits that make someone an effective one.

Tech amplifies this. The heroic lone engineer working all weekend. The founder who builds the deck without asking anyone. The architect whose vision is always clear and never needs other perspectives. These figures get celebrated. The collaborative, self-aware engineer building trust across teams? Less often promoted. Wrong template.

The McKinsey research Chamorro-Premuzic contributed to found that the traits correlating with effective leadership are empathy, self-awareness, integrity, and humility. None of these are the traits we instinctively reach for in hiring or promotion discussions.

My own research found that 99.5% of survey respondents have worked for at least one bad boss. The meritocracy myth is part of how those bosses kept getting promoted.

The Language Keeps Bias Invisible

"Cultural fit." "Executive presence." "Gravitas." "We need someone who will own the room."

These phrases are everywhere in tech hiring and promotion. They sound like merit. They aren't. They're subjective descriptors without measurable criteria, creating space for bias to operate without accountability.

When "cultural fit" means someone whose communication style, educational background, and professional journey feel familiar to the people making the decision, you're building a system that replicates the existing leadership. Not selecting for capability.

Carol Edwards at Diversity Dashboard makes this plain: "Merit is rarely assessed in isolation. It is filtered through perception, expectation, familiarity, and networks."

Advancement requires more than strong results. It requires self-promotion, strategic networking, and visible confidence. These behaviors show up unevenly across demographic groups, not because of inherent differences, but because of decades of structural signals about who is expected to display them.

[Image: Two identical trophies, one elevated on a tall pedestal and one sitting on the ground]

The AI Angle

There's a new dimension worth noting. AI is increasingly used in hiring and performance evaluation. When companies use AI to select leaders, researchers found the algorithms nominate men 80% of the time and women 20%.

The AI isn't biased. The training data is. The AI learned from decades of promotion decisions and replicated them. If your underlying process is biased and you add AI on top, you aren't removing bias. You're systematizing it.
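The feedback loop can be made concrete with a toy model. This is a minimal sketch, not any real vendor's system: the records, group labels, and counts below are invented, and the "model" is just frequency counting. But it shows the mechanism: an algorithm trained on skewed historical decisions reproduces the skew rather than removing it.

```python
from collections import Counter

# Hypothetical historical promotion records: (gender, promoted).
# The skew mirrors decades of biased human decisions, not ability.
history = ([("M", True)] * 80 + [("F", True)] * 20 +
           [("M", False)] * 120 + [("F", False)] * 180)

def train(records):
    """Learn P(promoted | gender) by simple frequency counting."""
    promoted = Counter(g for g, p in records if p)
    total = Counter(g for g, _ in records)
    return {g: promoted[g] / total[g] for g in total}

model = train(history)

# The "model" scores candidates purely from the learned historical
# rates, so it faithfully reproduces the historical skew.
print(model)  # {'M': 0.4, 'F': 0.1}
```

Nothing in the code mentions merit. The only signal available is who was promoted before, so that is the only pattern the model can learn.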

The meritocracy story gets a technological veneer and becomes even harder to challenge.

What to Do About It

I'm not writing this to assign blame. I'm writing it because the meritocracy story is preventing tech from building better systems.

If you believe your process is fair, you won't audit it. If you believe the best people rise, you won't question why your leadership team looks the same decade after decade. The belief itself is the problem.

Castilla and Benard's proposed fix is practical: reduce managerial discretion, increase transparency, define competency criteria clearly and measurably. Run regular audits on outcomes by demographic. If the numbers show a gap, the "meritocracy" label is hiding something worth knowing.

Four things worth pushing on in any tech organization:

Audit promotion outcomes. Not intentions. Not process descriptions. Actual outcomes, by demographic. The gap is usually there when you look.

Kill subjective criteria. If you cannot measure it, and two different managers would not consistently apply it the same way, it isn't a selection criterion. It's a preference.

Watch who gets high-visibility work. Research consistently shows that high-visibility assignments are distributed unevenly. The people who receive them develop faster and get promoted more often. This is where much of the gap develops, long before any formal promotion decision.

Stop treating confidence as competence. Competent people often express uncertainty. Overconfident people rarely do. Structuring hiring and promotion to reward certainty will consistently select for confidence over capability.
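The first of these recommendations, auditing outcomes rather than intentions, can be sketched in a few lines. The record format and numbers here are invented, and the flagging rule is the four-fifths (80%) adverse-impact test from US employment guidelines rather than anything prescribed by the article, but the shape of a real audit is the same: compute rates per group, then compare each group against the best-performing one.

```python
from collections import defaultdict

# Hypothetical promotion records for one review cycle:
# (demographic_group, was_promoted). A real audit would pull these
# from an HRIS export; the groups and counts here are invented.
records = (
    [("group_a", True)] * 10 + [("group_a", False)] * 40 +
    [("group_b", True)] * 5 + [("group_b", False)] * 45
)

def promotion_rates(records):
    """Promotion rate per demographic group: promoted / eligible."""
    eligible = defaultdict(int)
    promoted = defaultdict(int)
    for group, was_promoted in records:
        eligible[group] += 1
        promoted[group] += was_promoted  # True counts as 1
    return {g: promoted[g] / eligible[g] for g in eligible}

def flag_gaps(rates, threshold=0.8):
    """Four-fifths rule: flag any group whose promotion rate falls
    below 80% of the highest group's rate (a common adverse-impact
    test in US employment-selection guidelines)."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

rates = promotion_rates(records)  # group_a: 0.20, group_b: 0.10
print(flag_gaps(rates))           # ['group_b']
```

The point of a script like this isn't sophistication. It's that the question "do outcomes differ by group?" is answerable in minutes once someone decides to ask it.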

The Story Is the Problem

Tech will keep believing in meritocracy because it's a flattering story. It tells people at the top that they earned it. It tells people who were passed over that they simply weren't good enough.

But when a system produces consistently skewed outcomes despite claiming to reward merit, the story needs challenging. Not to make anyone feel guilty. To build systems doing what the story promises.

The data is clear. The outcomes are measurable.

The question is whether you're willing to look at them.

What would your organization find if it audited who rises, and why?