An office worker hunched over a laptop in a dimly lit office corner, colleagues visible in the bright background unaware

Here's a number worth sitting with: 59% of employees hide their AI use from their bosses.

Not 5%. Not 10%. More than half the people on your team are doing something useful at work and actively keeping it from you.

This isn't an AI problem. This is a leadership problem.

The Numbers Are Worse Than You Think

Research from BlackFog, reported by CIO, found that 49% of workers admit to using unapproved AI tools. 51% have connected those tools to work systems without telling IT. 33% have uploaded proprietary research or enterprise datasets to tools the organization never sanctioned.

Your team is doing this. Today. Right now.

Not because they're reckless. Not because they don't care about security. Because they feel they have no other choice.

The Ivanti 2025 Technology at Work Report found that a third of employees who use AI keep it entirely secret from their employers. Gartner found that 67% use AI tools without explicit organizational approval.

We're not talking about one or two rebels. We're talking about the majority of your workforce.

UpGuard data shows over 80% of workers use unapproved AI tools, with nearly 90% of security professionals doing so. The people responsible for enforcing your AI policies are the most likely to ignore them.

Worth sitting with.

Why Are They Hiding It?

Three reasons show up consistently in the research.

Fear of job loss. Workers worry that admitting AI use will signal to leadership that the role is automatable. So they use the tool, get the work done faster, keep quiet, and hope no one asks questions. They're protecting their jobs by hiding the evidence that they're good at them.

Imposter syndrome. One employee quoted in the research said it plainly: "I don't want people to question my ability." They worry that relying on AI makes them look incompetent, even as it makes them perform better. The tool is making them more capable, and they're ashamed of it.

A private competitive edge. Some employees see AI proficiency as a personal advantage. They're not sharing it because they're not sure it's safe to share. Not safe from a policy standpoint. Safe from a cultural one.

Read that last point again. They don't think it's safe to share something that's making them better at their job.

This is a signal. Listen to it.

The Part Every Executive Should Find Embarrassing

A split showing fear and secrecy on the left versus psychological safety and openness on the right in the workplace

This is where it gets uncomfortable.

The same BlackFog research found 69% of presidents and C-suite members approve of unsanctioned AI use... while hiding their own. As BlackFog CEO Darren Williams put it, "Senior executives often don't want to admit they are using AI."

Your most senior leaders are doing the exact same thing they're trying to stop. They're modeling the behavior they claim to discourage. They're hiding their tools for the same reasons as their teams: fear of looking like they don't know what they're doing, fear of setting a bad example by endorsing something off-policy.

When the people at the top won't talk openly about AI, it sends one message to everyone below them.

AI is something to be ashamed of. Something dangerous. Something to do in secret.

So the whole organization complies. They hide it too.

I've written before about this dynamic in "The Thing Stopping AI Agent Adoption Isn't Technology. It's Leadership." The technical barriers to AI adoption in most organizations are largely gone. The human barriers aren't. The cultural barriers aren't.

And the cultural barrier starts at the top.

This Is a Culture Problem, Not a Security Problem

Yes, shadow AI creates real security risks. Employees uploading salary data or financial records to public AI tools is a genuine problem worth taking seriously.

But framing shadow AI as a security issue misses the bigger signal entirely.

When your team uses AI secretly, they're telling you something about your culture. They're saying: "I don't feel safe being honest with you about how I work."

Most leaders never hear it, because the people around them aren't being honest about it.

Research from Infosys and MIT Technology Review found 83% of executives believe psychological safety directly impacts the success of AI initiatives. The same research found only 39% of organizations rate their psychological safety as high or excellent.

So the overwhelming majority of leaders know psychological safety matters for AI success. The overwhelming majority of their organizations don't have it.

The gap between those two figures is where your team's secret AI lives.

And it gets worse. The Infosys research found that 22% of leaders have avoided taking on AI projects specifically because they fear being blamed if something goes wrong. Not junior employees. Leaders. The people who are supposed to set the direction.

If your leadership team won't touch AI without cover, why would anyone else?

I wrote in "AI Isn't Making You Smarter. It's Making You Lazy." about the risk of outsourcing your thinking. The shadow AI problem is the inverse: your team is thinking seriously about how to use AI well, and they're doing it without you. You're being left out of the most important capability development happening in your organization because they don't trust the culture enough to include you.

What a Safe Leader Does Instead

A team leader in open conversation with a diverse team reviewing AI tools and workflows together

The answer is not a policy. Policies didn't stop shadow AI. They accelerated it.

When you ban AI without building trust, your team doesn't stop using it. They go underground. You can't control whether they use the tools. What you can control is whether they feel they need to hide them.

Here's what safe leaders do instead.

Say it out loud yourself. Tell your team which AI tools you use. Tell them what you use them for. Tell them when AI helped you write something, analyze data, or prepare for a meeting. The moment you're open about your own AI use, you give everyone around you permission to be open about theirs. Modeling isn't an abstract leadership concept. It's the behavior people actually respond to.

Make experimentation visible. Build a standing agenda item for "what's working with AI this week." Make sharing wins and failures with new tools a normal part of how your team operates. Not a private activity. A team practice. When people see their colleagues using AI without shame, the shame disappears.

Separate the behavior from the tool. The real problem with shadow AI isn't the AI. It's employees uploading confidential data to untrusted systems. Address the data risk specifically. Make the security rules clear. Make the approved tools list accessible. Then step back and let people be productive.

Ask the question directly. "What AI tools are you using that I don't know about?" Ask it with genuine curiosity. Ask it without a pointed finger. You'll be surprised by what comes back, and more importantly, you'll demonstrate that it's safe to be honest.

The UpGuard research found that fewer than half of employees understand their organization's AI usage policies. You can't hold anyone accountable to rules they don't know exist. Start with clarity before accountability.

The Question Worth Sitting With

Secret AI in your organization is a symptom. The illness is a culture where your team doesn't feel safe being honest about how they work.

Every hour your team spends hiding their tools from you is an hour not spent helping you figure out how to use those tools better across the whole organization. You're losing ground. Not to competitors with better technology. To yourself.

So here's the question worth sitting with: if your team found a faster, better way to do their jobs tomorrow, would they tell you?

Or would they keep it to themselves?

The answer tells you everything about the culture you've built.