A boardroom manager with a NO AI sign on the wall while all employees secretly use AI chatbots on their devices

Here's something uncomfortable.

Right now, while you're deciding whether to "allow" AI in your organization, your team is already using it. At their desks. On their phones. On company devices. Most of them aren't telling you.

A global study cited by Business Insider found 57% of employees admit to hiding their AI usage from their employers. More than half the people using AI at work are keeping it from you.

Ivanti's 2025 Technology at Work Report, which surveyed over 6,000 office workers, found that 1 in 3 employees who use generative AI keep it secret from their employer.

So if you're sitting in the executive suite convinced your "no AI" policy is working... you're not protected. You're the last to know.

Why Leaders Default to "No"

The instinct to ban things you don't understand is as old as leadership itself. New tool shows up, risks aren't clear, lawyers get nervous, IT raises concerns, and the easiest decision is to say "not yet."

The problem is that "not yet" became permanent for too many organizations. While leadership deliberated, employees stopped waiting.

They downloaded ChatGPT. Signed up for Copilot. Started using Gemini to draft emails, debug code, summarize reports, and do in an hour what used to take a day. All of it without asking, because they figured someone would say no.

Gartner found 67% of employees use AI or machine learning solutions without explicit organizational approval. Software AG research puts it higher: 75% of knowledge workers are already using AI, and many say they'd keep using it even if told to stop.

This isn't rebellion. It's adaptation. Your people are trying to do their jobs well. You've left them to figure it out alone.

The Ban Creates a Worse Problem

Banning AI doesn't eliminate the risk. It concentrates it and hides it from view.

When employees use unauthorized tools openly, there's at least a chance someone notices and starts a conversation about governance. When they use those same tools in secret, nobody knows. Data flows into public AI systems without oversight. Sensitive customer information gets pasted into ChatGPT prompts. Code gets reviewed by models trained on who-knows-what. You won't find out until something breaks.

A split image showing a banned AI policy document on one side and a confident leader holding a clear AI usage guide on the other

63% of companies have no AI usage policy at all. Not a considered "no." No policy. A vacuum. And employees fill vacuums with their own judgment.

Some of it is fine. Much of it introduces legal, compliance, and data security risk the organization has no visibility into.

The ban didn't prevent the risk. It made the risk invisible.

What This Looks Like in Practice

Let me tell you what happens when organizations try to lock down AI.

The engineers use GitHub Copilot on personal devices and commit the output. The HR team uses ChatGPT to draft job descriptions because it's faster than the internal process. Customer success uses AI to summarize support tickets before escalating. Sales uses it to prep for calls.

All of it happening. None of it visible to leadership. All of it carrying risk the organization never approved, because the organization never bothered to create a policy.

The employees hiding their AI use aren't bad actors. Ivanti's research identified three common reasons people conceal it: they want a competitive edge, they fear looking like they're relying on a crutch, or they're worried about job security. They're not trying to cause problems. They're trying to survive in an organization that isn't keeping pace with the tools available to them.

The Right Move

Your AI strategy shouldn't be "yes" or "no." It should be "here's how."

You don't need to become an AI expert overnight. You don't need to deploy an enterprise AI platform or hire a Head of AI or write a 50-page governance document. Start with something honest and simple.

Tell your team what's allowed. Pick a few approved tools. Put them in writing. If you're comfortable with people using ChatGPT for non-sensitive tasks, say so. If company data should never go into external AI systems, say so. Clear rules beat no rules every time.

Ask what people are already using. You'll be surprised. And you'll learn more in one honest team conversation than in six months of enforcement theater. Your people have already run the experiments. Let them tell you what works.

Build a reporting norm, not a blame culture. The fastest way to drive AI underground is to punish people for using it without permission. The fastest way to bring it into the open is to treat AI tool usage as a normal topic in team conversations.

Set the data boundaries clearly. This is where the real risk lives. It doesn't matter which AI tool someone uses, as long as they know which data categories are off limits. Personal data. Financial records. Customer information. Internal code. Define the lines and make them easy to remember.
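To make that concrete, here's a minimal sketch of what a boundary like this could look like once it's written down precisely enough to enforce. This is a hypothetical illustration, not a real data-loss-prevention rule set: the category names and patterns below are placeholders for whatever your organization actually declares off limits.

```python
import re

# Hypothetical placeholders: swap in the categories and patterns
# your organization actually declares off limits.
OFF_LIMITS = {
    "personal data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # e.g. a US SSN shape
    "financial records": re.compile(r"\b\d{4}(?:[ -]\d{4}){3}\b"),  # e.g. a card-number shape
    "customer information": re.compile(r"\bcustomer[_ ]?id\b", re.IGNORECASE),
}

def flag_off_limits(prompt: str) -> list[str]:
    """Return the off-limits categories a draft prompt appears to touch."""
    return [category for category, pattern in OFF_LIMITS.items() if pattern.search(prompt)]

# A draft someone is about to paste into an external AI tool:
draft = "Summarize the open ticket for customer_id 88231, SSN 123-45-6789."
print(flag_off_limits(draft))  # ['personal data', 'customer information']
```

The point isn't the code. It's that a boundary defined this concretely is one your team can actually remember, and one your tooling can eventually enforce. "Don't paste sensitive stuff" is a vibe; a named list of categories is a rule.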

Lead by example. If you're a leader who's never tried an AI tool, start. Not to become a power user, but to have an informed opinion. Your team deserves an informed opinion from you.

A diverse tech team openly using AI tools at their desks with a supportive, engaged leader standing among them

The Fear Underneath the Policy

I think the real fear isn't the technology. It's loss of control.

If your team is using AI to do more in less time... what does this mean for headcount conversations? If AI writes the first draft... whose work is it? If errors appear in AI-assisted output... where does accountability land?

These are real questions. They deserve real answers. But "no AI" doesn't answer them. It postpones them while the gap between your policy and your team's reality widens every month.

Ben Morton, a leadership coach who works with organizations on exactly these challenges, makes the point directly: if your only AI strategy is "don't," you're not making people safer. You're making yourself less informed.

There's also the reverse error worth naming. Outsourcing your thinking to AI and accepting its outputs without judgment is its own kind of leadership failure. The goal isn't to ban AI or surrender to it. It's to lead your organization through it with clear thinking and honest conversation.

I've written about the trust side of this on Step It Up HR before. When employees feel they have to hide what they're doing to get their work done, the organization has a problem that goes deeper than any tool or policy. It has a trust gap. And trust gaps don't close with bans.

The Leaders Getting This Right

The organizations doing AI well right now aren't the ones with the most sophisticated technology. They're the ones who started the conversation early. They named the risks, set the boundaries, and gave their teams permission to experiment inside a defined space.

Their employees aren't hiding anything. Their data governance is intact. Their risk exposure is known. And they're compounding productivity gains month over month while competitors are still debating whether to write a policy.

The gap between those two groups is going to widen, not close.

So here's the question: do you want to be the leader who found out what your team was doing with AI... or the one who shaped what they did with it?

The first one learns too late. The second one still has choices.