Every engineering leader has sat through some version of the "build trust with your team" talk. It sounds like a workshop topic. It sounds soft. It sounds like the thing you say before moving on to the real stuff... velocity, deployment frequency, technical debt.

The data disagrees. Trust IS the real stuff.

[Image: Fragmented gears representing the cost of low trust in engineering teams]

Google Spent Two Years Looking for the Secret to Great Teams

Starting in 2012, Google's Project Aristotle studied 180 of their own teams (115 engineering, 65 sales). They examined 250 different attributes over two years. They expected to find the obvious answer: hire brilliant people, give them a strong manager, and the team performs.

Wrong.

The #1 factor for high-performing teams was psychological safety. Not raw talent. Not co-location (whether people work in the same building turned out to be irrelevant). Not seniority. Not individual performance ratings. What mattered most was whether people felt safe enough to speak up, take risks, and admit mistakes without fear of punishment.

Google's researchers put it plainly: "Even the extremely smart, high-powered employees at Google needed a psychologically safe work environment to contribute the talents they had to offer."

Psychological safety is trust. Trust in the team. Trust in the leader. Trust in the system.

DORA Measured It at Scale

The DevOps Research and Assessment team has been studying software delivery for over a decade. Their 2024 State of DevOps report drew on responses from more than 39,000 professionals globally.

Their finding is consistent year after year: generative culture (high cooperation, shared risk, blameless failure inquiry, active cross-boundary collaboration) directly predicts software delivery performance. Organizations with the highest trust ship faster, fail less often, and recover faster when things go wrong.

The numbers are telling. Elite-performing teams spend 50% of their time on new, high-value work. Low-performing teams spend only 30% there. Elite performers spend 10% of their time fixing user-identified defects. Low performers spend 20%.
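Those percentages translate into concrete hours. A back-of-envelope sketch using the DORA time splits above (the 2,000-hour work year and the eight-person team are illustrative assumptions, not figures from the report):

```python
# Back-of-envelope: hours of high-value work gained per year when a team
# moves from the low-performer time split to the elite split.
# Assumptions (illustrative, not from the DORA report):
HOURS_PER_YEAR = 2000   # rough working hours per engineer per year
TEAM_SIZE = 8           # example team

ELITE_NEW_WORK = 0.50   # elite teams: 50% of time on new, high-value work
LOW_NEW_WORK = 0.30     # low performers: 30%

gap_per_engineer = (ELITE_NEW_WORK - LOW_NEW_WORK) * HOURS_PER_YEAR
gap_per_team = gap_per_engineer * TEAM_SIZE

print(f"Per engineer: {gap_per_engineer:.0f} hours/year")  # 400 hours
print(f"Per team:     {gap_per_team:.0f} hours/year")      # 3200 hours
```

Four hundred hours per engineer per year is ten standard work weeks. That is the scale of the environment gap, before counting defect remediation at all.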

[Image: Performance dashboard showing delivery metrics improving with high trust]

Not a skill gap. Not a tooling gap. An environment gap. Low-trust teams lose time to rework, defect remediation, and second-guessing. High-trust teams spend those same hours building.

Low Trust Breaks Your Ability to See Reality

This is the part nobody talks about enough, and it is the insight that changed how I think about the whole problem.

John Cutler, in his newsletter piece "Trust lets you observe reality", makes a point worth sitting with: when trust is low, organizations lose their ability to understand what is genuinely wrong with them. They reach for oversimplified metrics. Those metrics become targets. And when a measure becomes a target, it ceases to be a good measure. Goodhart's Law, applied to teams.

In a low-trust environment:

  • Engineers optimize for what is measured, not what matters
  • Problems get hidden until they are too big to ignore
  • Post-mortems quietly blame individuals rather than systems
  • Your dashboards look fine right up until they do not

Research from Adaptavist found 74% of knowledge workers do not consistently understand the "why" behind their workplace tasks. Of those, 45% report reduced motivation. In a low-trust engineering organization, nobody explains the reasoning behind decisions. The "why" disappears. People stop caring. Velocity drops. You measure velocity harder. Trust drops further.

The cycle feeds itself.

What Low Trust Looks Like Day to Day

You might not recognize your team in phrases like "low trust." Here is what it looks like on the ground.

Code reviews go adversarial. Engineers write defensive comments. Junior developers stop asking questions. PRs sit for days because nobody wants to make the first move.

Incidents get vague write-ups. "A deployment caused an outage." Not "the deployment process allowed this error through." Not "we need to change our rollout strategy." The post-mortem exists to satisfy a process, not to learn anything.

Technical debt accumulates without discussion. Engineers know it is there. They do not raise it because the last time someone raised it, they were told to "focus on features." So the debt grows. Eventually it eats a sprint.

Good people leave. They cite "better opportunities." What they mean is: I trust my next employer more.

What High Trust Looks Like

High-trust engineering teams are not teams without problems. They are teams where problems surface fast.

[Image: A diverse engineering team reviewing code together in a high-trust environment]

Engineers speak up early when something is going wrong. Post-mortems examine systems, not people. Technical debt gets named and prioritized. Deployment frequency is high not because people are reckless, but because they trust the safety net... and trust each other to fix things when they break.

DORA's research describes this through the Westrum model. Generative organizations are characterized by high cooperation, shared risks, and active encouragement of cross-silo collaboration. Compare that to pathological cultures (information hoarded, failure punished, cross-team collaboration discouraged) and the performance gap is not marginal. It is the difference between elite and low performers.

Where to Start

I am not going to tell you to run a team retrospective on trust. A trust workshop gives you a nice sticky-note session and zero lasting change.

Here is what moves the needle.

Fix the post-mortem first. If your incident reviews end with blame (explicit or implicit), nothing else you do will stick. Make it institutional: the goal of a post-mortem is to find the system failure, not the human failure. When your people see you mean it, the culture begins to shift.

Explain the reasoning behind decisions. Every time a major architectural, process, or product decision gets made, write one paragraph explaining why. Not a press release. A real explanation. 74% of knowledge workers lack this context, and it drains motivation at a scale most leaders do not notice.

Make safety explicit in code review. A comment asking a question beats a comment passing judgment every time. "I'm thinking about X... what was your reasoning for Y?" is different from "this is wrong." It models the behavior you need from your whole team.

Admit your own failures publicly. In front of your team. When you make a call and it does not work out, name it. Describe what you would do differently. Not weakness. It is the foundation of a culture where your engineers do the same.

Stop measuring things nobody understands. If your team does not know what a metric measures or why it matters, drop it. Metrics without context become weapons in low-trust environments. Someone gets blamed for them eventually.


Trust is not a workshop. It is not a vibe. It is the infrastructure on which your engineering performance is built.

Google measured it. DORA measured it across 39,000 practitioners and a decade of research. The teams shipping the most, with the fewest defects, recovering the fastest... those teams built trust first.

What would it cost you to calculate the trust deficit on your own team?