The risk is probably already in your building. About 78% of employees are using AI tools their employer never approved. That number is not from a security vendor trying to sell you something. It comes from cross-industry usage research. The term for it is shadow AI, and Microsoft just took a full enterprise product out of preview to help companies deal with it. If you run a business with 25 to 250 employees, this belongs on your radar now.

What Shadow AI Actually Is

Shadow AI is the AI version of shadow IT. That is the long-running pattern of employees using software their IT department never sanctioned. Think personal Dropbox accounts for work files, or a team standing up a free Slack alternative because IT was slow to provision accounts.

Shadow AI follows the same logic, but the surface area is much bigger.

A customer service rep copies a client complaint into ChatGPT to draft a reply. A finance manager runs a budget spreadsheet through a free AI summarizer and sends the output to the board. Your salesperson, who figured out last month that an AI writing tool can personalize outreach at scale, has been using it on every deal since. None of these people think they're doing something wrong. They found tools that make them better at their jobs.

The problem is that the data went somewhere. And nobody at your company knows where.

The Numbers Are Not Small

About 78% of employees use AI tools their employer has never approved. In a 50-person company, that is roughly 39 people running unsanctioned AI tools on any given workday.

The density problem is worse at smaller organizations. Research estimates 269 unsanctioned AI apps per 1,000 employees at companies in the 11 to 50 headcount range. That is a higher concentration than you see at large enterprises, which at least have procurement teams and compliance review. Smaller companies skip those gates because nobody set them up.

Only 17% of organizations have any technical control that actually stops employees from pasting sensitive data into a public AI tool. The other 83% are operating on the assumption that employees will make the right call. That assumption has not held up.

What Actually Goes Wrong

The Samsung case is the one that gets cited most in security circles, and for good reason. In 2023, Samsung engineers pasted chip manufacturing source code into ChatGPT in two separate incidents; in a third, an employee ran a confidential internal meeting through an AI transcription tool. Three incidents in roughly 20 days, at a company with thousands of security engineers on staff. The incidents happened anyway, because the employees were not malicious. They were trying to work faster.

For a 75-person company in New Jersey, the equivalent is a project manager pasting client product specifications into a free AI tool to generate a proposal. That data now sits on servers your business has no contract with. If that client relationship includes any kind of NDA, you have a real problem. If you operate under HIPAA or PCI-DSS, you have a compliance problem on top of it.

Industry research puts the detection timeline for shadow AI incidents well beyond that of standard breaches: six months or more before the exposure is even identified. At that point, the damage is done and the cleanup costs are not small.

Why Microsoft Agent 365 Changes the Conversation

Microsoft moved Agent 365 to general availability at the start of May. The product is a control plane built specifically to discover and govern AI agents running without IT knowledge. That includes AI-powered browser extensions, third-party writing tools, and what Microsoft calls shadow agents: fully autonomous AI workflows employees spin up on their own devices without involving IT at all.

The direct framing from Microsoft: you cannot govern what you cannot see.

The price point is $15 per user per month. That is not a price aimed at Fortune 500 companies. It is priced for the 50-to-500 person organization that has real AI sprawl but no dedicated security team.

Whether Agent 365 is the right fit for your environment or not, the signal matters. Microsoft does not build products for problems that do not exist. Shadow AI governance is now a market category. The tools are available and the pricing reflects the scale of the actual problem.

What You Can Do Right Now

You do not need an enterprise product to start addressing this.

Get visibility first. A basic audit of what AI tools your team is actually using is where most organizations need to begin. You cannot make decisions about what to allow or block until you know what is already in use. Most managed IT providers can run this kind of assessment as part of a regular environment review. It typically surfaces more than expected.
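To make "visibility" concrete, here is a minimal Python sketch of one low-effort starting point: counting hits against a list of known AI service domains in an exported DNS or proxy log. The domain list and the log format here are illustrative assumptions, not a complete inventory; a managed IT provider would do this with proper discovery tooling rather than a script.

```python
from collections import Counter

# Hypothetical starter list -- extend with the tools your team actually uses.
AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "perplexity.ai",
}

def audit_dns_log(lines):
    """Count queries to known AI domains in a DNS/proxy log export.

    Assumes one whitespace-separated record per line with the queried
    domain in the last column -- adjust the parsing to your log format.
    """
    hits = Counter()
    for line in lines:
        fields = line.split()
        if not fields:
            continue
        domain = fields[-1].lower().rstrip(".")
        # Match the domain itself or any subdomain of it.
        if domain in AI_DOMAINS or any(domain.endswith("." + d) for d in AI_DOMAINS):
            hits[domain] += 1
    return hits

# Tiny fabricated sample, stand-in for a real log export.
sample = [
    "2026-02-03T09:14:01 10.0.0.42 chatgpt.com",
    "2026-02-03T09:14:05 10.0.0.42 example.com",
    "2026-02-03T09:15:22 10.0.0.17 claude.ai",
    "2026-02-03T09:16:40 10.0.0.42 chatgpt.com",
]
print(audit_dns_log(sample))  # Counter({'chatgpt.com': 2, 'claude.ai': 1})
```

Even a rough count like this answers the first question: which AI services are in use at all, and how often. Pairing the domains with source IPs (the second column in this assumed format) is the natural next step if you want to know which teams are driving the traffic.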

Build a simple acceptable-use policy. This does not need to be long. The core questions are: what categories of business data are allowed to leave your controlled environment, and which ones stay internal? For most companies, client information, financial records, and proprietary processes are in the second category. Employees need to know that before they make the call in the moment.

Designate approved tools with proper data handling agreements. Enterprise tiers of ChatGPT, Microsoft Copilot, and Claude for Business all include data processing agreements that do not use your inputs for model training. The employee who wants AI help can still get it. They just use the version you have vetted and contracted.

If your business has compliance obligations under HIPAA, PCI-DSS, or SOC 2, this is not optional. A conversation with whoever handles your compliance work is the right starting point. AI tool governance intersects with data handling obligations in ways that are not always obvious until an incident happens.

The harder truth is that most 40 to 150 person businesses do not have anyone thinking about this daily. The employees using unauthorized AI tools are not careless. The IT environment just was not built fast enough to keep up with how quickly these tools spread over the last two years. That gap is what a managed IT partner is supposed to close.


Frequently Asked Questions

What is shadow AI?

Shadow AI is any AI tool an employee uses without IT department approval or knowledge. This covers public chatbots, AI-powered browser plugins, free writing assistants, and automated workflow tools. The issue is not the category of tool. It is that IT has no visibility into what data is moving through them.

How common is shadow AI in the workplace?

Research puts the number around 78% of employees using unapproved AI tools. Smaller organizations tend to have higher tool density, not lower, because they have fewer procurement gates and fewer controls in place.

What are the main shadow AI risks for my business?

Data exposure is the primary risk. Customer data, financial records, and internal business information pasted into public AI tools all end up on third-party servers without a data agreement. That creates compliance exposure under regulations like HIPAA and PCI-DSS. Detection timelines are long, which means damage accumulates before anyone knows something went wrong.

Does this affect businesses under 100 employees?

Yes, often more than larger companies. Smaller organizations have fewer technical controls and less formal IT oversight. The employees are just as likely to use unauthorized tools. The difference is a smaller company is less likely to catch it.

What should my first step be?

Start with a visibility audit. You cannot govern what you cannot see. Understanding what AI tools your team actually uses takes a few hours with the right approach, and it almost always turns up more than expected.

If your team is using AI tools you haven't audited, now is a good time to get ahead of it. Get in touch to talk through what an AI tool audit looks like for your environment.