Most businesses with 40 to 150 employees are already sitting on more AI agents than anyone has counted. The tools went in department by department, each solving an immediate problem. A sales rep found a prospecting tool that cuts research time in half. Marketing signed up for an AI email assistant. Operations built an automated workflow through a no-code platform. Finance started using an AI that pulls from the accounting system.
None of those made it onto an approved software list. Most did not go through IT.
That accumulation of autonomous systems, running independently, accessing different data, connecting to different platforms, with no centralized list of what any of them are doing, is what the industry now calls AI agent sprawl. Gartner flagged it in April 2026 as the next massive challenge for IT leaders. The numbers behind that assessment are worth understanding.
What AI Agent Sprawl Actually Means
An AI agent is not the same as a chatbot you open and close. Agents take actions. They read emails, update records, send messages, call APIs, and make decisions without waiting for a person to click approve. Many run continuously in the background.
When a business has one or two of these under IT supervision, that is manageable. When every department deploys its own, often without mentioning it to anyone, the picture changes fast.
Gartner projects that by 2028, a typical Fortune 500 company will run more than 150,000 agents, up from fewer than 15 in 2025. That curve hits mid-market companies too, just at a smaller scale. A 75-person company that had no formal AI strategy in 2024 might be running eight or ten agents today if you count everything the team has signed up for.
The 2026 Salesforce Connectivity Benchmark Report found that more than four in five IT leaders believe the proliferation of AI agents will create more complexity than value due to integration challenges and silos. A separate survey found that 87% of IT leaders say AI agents are already embedded in critical systems. Only 25% claim full visibility into what is actually running.
That gap between deployment and visibility is the core problem.
How It Happens at a 50-Person Firm
No department creates sprawl on purpose. It builds the same way shadow IT always has, one tool at a time, because each tool solves an immediate problem.
A sales rep finds an AI prospecting tool that cuts research time in half. They sign up. It connects to the CRM. Another person sees it working and shares the login. Now an AI is accessing your customer data through a personal account with no IT review and no security audit.
A marketer builds an automated content workflow. It pulls from their inbox, drafts responses, schedules posts. Efficient, yes. But the system is reading internal communications and writing on the company's behalf, and it was set up in an afternoon without a vendor review.
Microsoft's 2026 Cyber Pulse report found that 29% of employees are using unsanctioned AI agents for work tasks, and that more than one-third of workers share sensitive company information with AI tools without employer permission. Netskope found the average enterprise logs 223 data policy violations per month related to AI usage.
Scale those numbers to a 60-person company and the pattern looks the same, just smaller. The violations are happening. They are not getting counted yet.
The Risks That Catch Teams Off Guard
The governance problem with AI agents is different from the shadow IT problems of a decade ago. When an employee installed unapproved software back then, IT could find it and remove it. Agents are harder to audit because they operate through integrations, APIs, and third-party platforms that live outside your internal network.
Three specific risks come up most often.
Data exposure. Every prompt an employee sends to an external AI model is data leaving your environment. If that tool was not vetted, you do not know how that data is stored, whether it is used to train other models, or who else can access it. For companies under HIPAA, SOC 2, or state-level data regulations, that is not a theoretical risk.
Identity and access chaos. When an agent connects to a system, it often does so using credentials that nobody formally provisioned. A developer links an AI to an internal database through a service account. The agent runs indefinitely. The person who set it up leaves the company. The access stays. IT finds out during a security audit, not before.
Compounding errors at scale. AI agents do not make mistakes slowly. They make them at machine speed, across every record they touch. One misconfigured agent can update hundreds of CRM records or send hundreds of emails before anyone notices. Catching problems early requires visibility you cannot have if you did not know the agent was running.
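The identity and access risk in particular lends itself to a mechanical check. Assuming you keep, or can export, a list of agent credentials with a named owner and a last-review date, a few lines of scripting can flag orphaned or overdue access before an audit does. This is a minimal sketch; the field names, agent names, and 90-day review window below are illustrative, not a standard:

```python
from datetime import date, timedelta

# Illustrative records: each agent credential with an accountable owner,
# whether that owner is still employed, and when access was last reviewed.
# Field names are hypothetical; adapt them to whatever export you have.
credentials = [
    {"agent": "crm-prospector", "owner": "j.doe", "owner_active": False,
     "last_reviewed": date(2025, 3, 1)},
    {"agent": "invoice-bot", "owner": "a.lee", "owner_active": True,
     "last_reviewed": date(2026, 1, 15)},
]

REVIEW_WINDOW = timedelta(days=90)  # assumed policy: review access quarterly

def flag_risky(creds, today):
    """Return credentials that are orphaned or overdue for review."""
    risky = []
    for c in creds:
        orphaned = not c["owner_active"]
        overdue = today - c["last_reviewed"] > REVIEW_WINDOW
        if orphaned or overdue:
            risky.append((c["agent"], "orphaned" if orphaned else "overdue"))
    return risky

print(flag_risky(credentials, date(2026, 4, 1)))
# → [('crm-prospector', 'orphaned')]
```

The point is not the script itself but the habit it encodes: access granted to an agent should expire or get re-reviewed on a schedule, just like a human account.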
What Good Governance Looks Like
Gartner's April 2026 framework for managing agent sprawl comes down to one core prerequisite: a centralized inventory before anything else. You cannot govern agents you do not know exist.
Businesses getting ahead of this are doing a few practical things. They are running an AI tool audit across every department. They are assigning explicit ownership to each agent. Not just "the sales team uses this" but a named person accountable for how it behaves and what it can access. They are putting new AI tool requests through a lightweight review before deployment, not as an afterthought.
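The audit-ownership-review loop above does not need specialized tooling to start. A spreadsheet export checked mechanically is enough. Here is one possible sketch; the CSV columns and tool names are invented for illustration, not a standard schema:

```python
import csv
import io

# A minimal inventory: one row per AI tool, collected from each department.
# Columns are illustrative; any export with an owner and review field works.
INVENTORY_CSV = """\
tool,department,owner,data_access,it_reviewed
ProspectAI,Sales,,CRM contacts,no
MailDrafter,Marketing,m.chan,Inbox,yes
FlowBot,Operations,,Order records,no
"""

def audit(inventory_csv):
    """Flag entries missing a named owner or a pre-deployment IT review."""
    gaps = []
    for row in csv.DictReader(io.StringIO(inventory_csv)):
        problems = []
        if not row["owner"].strip():
            problems.append("no named owner")
        if row["it_reviewed"].strip().lower() != "yes":
            problems.append("no IT review")
        if problems:
            gaps.append((row["tool"], problems))
    return gaps

for tool, problems in audit(INVENTORY_CSV):
    print(f"{tool}: {', '.join(problems)}")
# prints:
# ProspectAI: no named owner, no IT review
# FlowBot: no named owner, no IT review
```

A check this simple is enough to turn "the sales team uses this" into a named owner and a review date, which is most of what the Gartner prerequisite asks for.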
Good managed IT services treat AI agent governance the same way they treat software licensing and access management. Someone needs to own the inventory, keep it current, and review it on a regular schedule. Without that structure, the inventory goes stale while the list of agents only gets longer.
For businesses that handle sensitive data, connecting AI governance to your compliance services framework matters. The same regulations that govern how you store customer data apply to how your AI agents process it.
Frequently Asked Questions
What is AI agent sprawl?
AI agent sprawl is the accumulation of autonomous AI systems across an organization without centralized oversight. Each agent may operate independently, access different data, and run without IT's knowledge, creating governance, security, and compliance gaps.
How do I know if my business has an AI agent problem?
Start by asking every department what AI tools they use and what those tools can access or do on their own. If the answers surprise you, that is the beginning of an agent sprawl audit. Most companies with 30 or more employees find more than they expected.
Is AI agent sprawl a problem for growing businesses or just large enterprises?
Both. The scale differs but the pattern is the same. A 60-person firm running eight unsanctioned AI agents has proportionally the same exposure as an enterprise running 8,000. Data regulations and liability risks do not scale down with headcount.
What is the biggest risk from unmanaged AI agents?
Data exposure is the most common. Employees send sensitive business information to external AI services that were never vetted, and the organization has no visibility into how that data is handled, retained, or used.
What is the first step to getting control of AI agent sprawl?
Run an inventory. Ask every department to list the AI tools they use, what data those tools access, and whether IT reviewed them before deployment. The inventory does not have to be perfect. It just has to exist.
Not sure what AI tools are running across your business? An AI agent audit is a good starting point. Get in touch to talk through what that looks like.