Your employees are installing AI agents into your Microsoft 365 environment without asking anyone. When those tools cause a breach, your cyber insurer may decline to pay, because your policy was written before agentic AI existed. This is not a hypothetical. It is happening to SMB clients right now, and the MSP community is sounding the alarm.
The New Shadow IT Problem Has Teeth
Shadow IT has always been a headache. Employees use personal Dropbox accounts, install unapproved browser extensions, forward work email to Gmail. IT teams learn about it after the fact, lock it down, and move on. The damage is usually contained.
Agentic AI tools are a different category of risk entirely. These are not passive apps that store files. They are autonomous systems that request broad OAuth permissions across your entire Entra ID tenant, then act on your behalf: reading email, accessing SharePoint, calling external APIs, triggering workflows, and moving data. The employee who installed the tool may not fully understand what it is doing in the background. Your IT team almost certainly does not know the tool is there at all.
The tools showing up in enterprise environments right now include open-source projects like OpenClaw, Hermes, and Cowork. They are free, easy to install, and marketed as productivity enhancers. They are also, in the words of one MSP operator, functionally similar to malware.
What the MSP and IT Communities Are Saying
A thread on r/msp put this issue in direct terms that every business owner should read. The operator described clients deploying these open-source agentic tools on their own, without consulting their IT provider; tools that by design request maximum permissions across corporate networks and Entra tenants. The post warns: "These tools are in practice, similar to malware. Their inherent nature is to have maximum permissions, so they can do anything an employee can." The same operator flagged the insurance exposure plainly: "Having to remind them that their cyber insurance policy is not likely to pay out if this is breached. Policies aren't even updated for this shit yet."
That last sentence is the part that should stop you cold. Cyber insurance policies are legal contracts written to cover specific, defined scenarios. Policies issued two or three years ago were not written with autonomous AI agents in mind. Insurers are not rushing to update coverage to include losses caused by tools your own employees installed without authorization.
A parallel thread on r/ITManagers raises a second problem: even if you are monitoring your environment, traditional log-based security may be blind to what these agents are actually doing. As one IT manager put it, "Now with agents actually taking actions, logs feel kinda useless after the fact. Like cool we can see what happened after it already happened. That doesn't really help with agentic AI security risks when the agent can hit APIs, move data, trigger workflows etc." Incident response requires knowing what happened. With agentic tools operating in real time across multiple systems, by the time your logs show the damage, the data is already gone.
The cultural backdrop makes this worse. A thread on r/sysadmin with over 1,300 upvotes captures how normalized uncritical AI adoption has become at the management level. When executives are using LLMs to write and respond to emails without reading them, the bar for evaluating an AI tool before installing it is essentially zero. If leadership treats AI as frictionless and infallible, employees follow that lead.
Why This Hits NJ, NY, and CT SMBs Especially Hard
A denied cyber insurance claim is painful for any business. For a healthcare practice in Parsippany, a financial advisory firm in Stamford, or a law office in White Plains, it can be existential.
HIPAA does not care that an employee installed an AI tool in good faith. If protected health information was accessed or exfiltrated by an unauthorized application with maximum Entra permissions, OCR will treat it as a reportable breach. The fine comes regardless of whether your insurer pays. New York's SHIELD Act and the New Jersey Identity Theft Prevention Act impose their own notification and penalty frameworks on top of federal requirements. Financial services firms operating under GLBA face examiner scrutiny over unauthorized access to customer data, full stop.
Most SMBs in these industries have no dedicated IT department. They have a part-time IT person, an office manager who handles tech issues, or nothing at all. They rely on their managed IT services provider to know what is running in their environment. If that MSP is not actively monitoring for new OAuth grants and third-party app integrations, no one is watching the door.
What to Do Before the Breach Happens
The good news is that this problem is auditable and, to a significant degree, preventable. These are the concrete steps that matter.
Audit your Microsoft 365 app registrations now. In Entra ID, under Enterprise Applications, you can see every third-party application that has been granted permissions to your tenant. Many SMB owners have never looked at this list. You should. Any application with read/write access to Mail, Files, or User data that you did not explicitly approve is a risk that needs to be evaluated immediately.
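If you or your IT provider want to script that audit rather than click through the portal, the logic is simple: pull the tenant's OAuth permission grants and flag anything with broad Mail, Files, or User scopes. The sketch below assumes the grant objects have already been fetched from Microsoft Graph (the live call, which requires an admin token, is noted in a comment); the scope names are Graph's, but treat this as a starting point, not a hardened audit tool.

```python
# Sketch: flag third-party OAuth grants that carry broad Mail/Files/User scopes.
# Assumes the grant objects were already fetched, e.g. from
#   GET https://graph.microsoft.com/v1.0/oauth2PermissionGrants
# using an admin token (an assumption about your setup; adapt the auth
# flow to your tenant). Here the response is mocked so the logic is clear.

RISKY_SCOPES = {
    "Mail.Read", "Mail.ReadWrite", "Mail.Send",
    "Files.Read.All", "Files.ReadWrite.All",
    "User.Read.All", "User.ReadWrite.All",
    "Sites.ReadWrite.All",
}

def flag_risky_grants(grants):
    """Return (clientId, risky scopes) for each grant touching a risky scope.

    Each grant is an oauth2PermissionGrant object; its `scope` field is a
    space-separated string of delegated permissions.
    """
    flagged = []
    for grant in grants:
        scopes = set(grant.get("scope", "").split())
        risky = scopes & RISKY_SCOPES
        if risky:
            flagged.append((grant["clientId"], sorted(risky)))
    return flagged

# Mocked Graph response: one benign sign-in grant, one agent-style grant.
sample = [
    {"clientId": "app-123", "scope": "User.Read openid profile"},
    {"clientId": "app-456", "scope": "Mail.ReadWrite Files.ReadWrite.All offline_access"},
]
for client_id, risky in flag_risky_grants(sample):
    print(f"{client_id}: {', '.join(risky)}")  # only app-456 is flagged
```

Anything this kind of check surfaces still needs a human decision: the `clientId` has to be mapped back to the actual application and to whoever approved it.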
Review your cyber insurance policy for AI-related exclusions. Pull your current policy and look specifically at language around unauthorized software, employee-introduced tools, and coverage conditions related to access controls. If your broker cannot explain how the policy handles a breach caused by an employee-installed agentic AI tool, escalate the question in writing. You want documentation of that conversation before a claim, not after.
Restrict OAuth consent to IT-approved applications. In Microsoft 365, you can configure user consent settings so that employees cannot grant third-party applications access to your tenant without administrator approval. This is not the default setting. Locking it down is one of the highest-value, lowest-cost controls an SMB can implement today. Your cybersecurity services provider should have this in your baseline configuration.
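You can also verify where that setting currently stands programmatically. The sketch below checks the tenant's authorization policy for any self-service consent grant still assigned to ordinary users; the input mirrors the object returned by Graph's `GET /v1.0/policies/authorizationPolicy`, with the response mocked here. Whether this one field fully captures your tenant's consent posture is an assumption; confirm against the Entra admin center.

```python
# Sketch: decide whether a tenant's consent policy still lets ordinary
# users grant third-party apps access themselves. Input mirrors
#   GET https://graph.microsoft.com/v1.0/policies/authorizationPolicy
# (field names are Graph's; sufficiency of this single check for your
# tenant is an assumption -- verify in the Entra admin center).

def users_can_consent(authorization_policy):
    """True if any self-service consent grant policy is still assigned."""
    perms = authorization_policy.get("defaultUserRolePermissions", {})
    assigned = perms.get("permissionGrantPoliciesAssigned", [])
    # Any "ManagePermissionGrantsForSelf.*" entry means users can consent
    # to at least some third-party apps without admin approval.
    return any(p.startswith("ManagePermissionGrantsForSelf") for p in assigned)

locked_down = {"defaultUserRolePermissions": {"permissionGrantPoliciesAssigned": []}}
default_ish = {"defaultUserRolePermissions": {"permissionGrantPoliciesAssigned": [
    "ManagePermissionGrantsForSelf.microsoft-user-default-legacy"]}}

print(users_can_consent(locked_down))   # False: admin approval required
print(users_can_consent(default_ish))   # True: users can still self-consent
```

Pair the lockdown with the admin consent workflow so employees have a sanctioned way to request new tools instead of routing around IT.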
Establish a formal AI tool approval process. Employees are not going to stop looking for productivity tools. The goal is not to ban AI. The goal is to require IT review before any AI tool touches your systems, data, or identity infrastructure. A lightweight approval checklist (permissions requested, data accessed, vendor security posture, and insurance implications) is enough for most SMBs. If you need a starting point, SMS offers a free AI policy kit that you can adapt for your organization.
Ask whether your monitoring can catch real-time agent behavior. If you are relying on log review after the fact, you already know from the r/ITManagers discussion that logs are insufficient for agentic systems. Proactive controls need to be part of your posture: conditional access policies, app governance in Microsoft Defender for Cloud Apps, and real-time alerting on anomalous API activity. This is where the gap between a reactive IT setup and a managed security practice becomes material.
The agentic AI market is moving faster than insurance carriers, compliance frameworks, and most SMB IT setups can track. Employees will keep installing these tools because they are free, powerful, and their managers are probably already using something similar. The businesses that avoid a catastrophic outcome are the ones that audit their environments now, close the consent gaps, and make sure their insurance coverage reflects the actual threat landscape they are operating in.
If you are not sure what is running in your Microsoft 365 tenant, or whether your current security setup would catch an agentic tool operating in the background, that is the right question to start with. SMS helps SMBs across New Jersey, New York, and Connecticut get a clear picture of their cybersecurity posture and close the gaps before an insurer uses them as a reason to deny a claim.