It is a scenario playing out in security operations centers everywhere: a team establishes acceptable use policies for an artificial intelligence web application, only to discover that the same AI model is now being accessed via a command-line interface (CLI), embedded in a third-party integration, or running autonomously as an agent. The tentacles of AI, it seems, slip through everywhere.
I spotted a Reddit thread recently in which a security practitioner asked for help with exactly this situation. They explained that tools like Claude were spreading across multiple entry points in their organization, and they were concerned that this fragmentation was making AI far harder to govern than a typical SaaS application.
The initial challenge is one of visibility: understanding where sensitive data is actually flowing across these diverse surfaces. Then, of course, it becomes one of control, over a nuanced array of data types, use cases, connections, and justifications.
The conversation is no longer just about a user typing a prompt into a browser. We are dealing with machine-to-machine traffic, API calls, and the rapid adoption of the Model Context Protocol (MCP). The Reddit post caught my eye because new data shows the OP is not alone in this challenge. According to the 2026 AI Risk and Readiness Report, only 8% of organizations have policies governing MCP traffic; the remaining 92% are either not monitoring MCP or have never heard of it. The same research finds that MCP-based agent connections, along with API integrations and machine-to-machine (M2M) communications, dominate the rankings of the interaction types that are hardest to monitor.
Unifying control across all channels
So what should you do? You need a unified strategy that applies consistent security across every entry point that the tentacles of AI may be using, without slowing down the user. Shift the focus from chasing individual applications to actively inspecting the connections, protecting the data and governing the interactions.
Secure the AI web apps
Use your Next Gen Secure Web Gateway (NG-SWG) and Cloud Access Security Broker (CASB) to secure AI web apps. These provide granular visibility into web-based AI usage and apply appropriate policies for both access and data protection.
Secure the connectors bringing context
To address CLI tools, automated workflows, and app connectors, you need controls built for machine traffic. Your Agentic Broker decodes and secures MCP traffic, giving you visibility and control over connections to external data and services such as Google Drive, Slack, and Salesforce. For private app-to-LLM traffic, your AI Gateway centralizes authentication, enforces rate limiting, and maintains fully searchable audit logs of all API calls.
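To make the gateway role concrete, here is a minimal, hypothetical sketch (the class and method names are illustrative, not any real product's API) of the three duties described above: authenticating callers, rate-limiting within a sliding window, and keeping a searchable audit log of every API call before it would be proxied upstream.

```python
import time
from collections import deque

class MiniAIGateway:
    """Toy illustration of an AI gateway's control plane:
    authentication, rate limiting, and audit logging."""

    def __init__(self, api_keys, max_calls_per_minute=60):
        self.api_keys = set(api_keys)
        self.max_calls = max_calls_per_minute
        self.call_times = deque()   # timestamps within the current window
        self.audit_log = []         # searchable record of every call

    def forward(self, api_key, model, prompt):
        # 1. Centralized authentication
        if api_key not in self.api_keys:
            raise PermissionError("unknown API key")

        # 2. Rate limiting over a 60-second sliding window
        now = time.time()
        while self.call_times and now - self.call_times[0] > 60:
            self.call_times.popleft()
        if len(self.call_times) >= self.max_calls:
            raise RuntimeError("rate limit exceeded")
        self.call_times.append(now)

        # 3. Audit logging (truncated prompt summary, not full content)
        self.audit_log.append({"ts": now, "key": api_key,
                               "model": model, "prompt": prompt[:80]})

        # A real deployment would proxy the request to the LLM endpoint here.
        return {"status": "forwarded", "model": model}
```

In practice these functions live in dedicated infrastructure, but the sketch shows why centralizing them matters: every private app-to-LLM call passes through one choke point where policy can be applied and evidence retained.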
Apply semantic data protection and active defense
Traditional DLP fails when AI transforms content. This is where AI Guardrails come into play; make sure they integrate seamlessly with your advanced DLP so you can inspect the multi-stage intent and semantic meaning behind every prompt and response. The goal is to prevent sensitive data, source code, and intellectual property from leaking, regardless of the channel it travels through or the ways it has been rewritten to evade basic word triggers. AI Guardrails also actively block sophisticated linguistic exploits, such as prompt injections and malicious jailbreaks, in real time.
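A toy example shows why pattern-only DLP breaks down once content is rewritten. The regex below catches a literal US Social Security number, but misses the same number once the digits are spelled out as words; even one normalization step (a crude stand-in for the semantic inspection described above, not a real product feature) recovers the match.

```python
import re

# Classic pattern DLP: matches only the literal SSN format NNN-NN-NNNN.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

DIGIT_WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3",
               "four": "4", "five": "5", "six": "6", "seven": "7",
               "eight": "8", "nine": "9"}

def pattern_dlp(text):
    """Fires only on the exact digit pattern."""
    return bool(SSN.search(text))

def normalized_dlp(text):
    """One small step toward semantic inspection: undo a single
    common transformation (digits spelled out as words) first."""
    words = [DIGIT_WORDS.get(w.lower(), w) for w in re.split(r"[\s-]+", text)]
    digits = "".join(w for w in words if w.isdigit() and len(w) == 1)
    # nine reassembled digits are treated as a probable SSN leak
    return pattern_dlp(text) or len(digits) >= 9
```

Real semantic analysis goes far beyond word substitution, of course; the point is that any inspection pinned to surface patterns is defeated the moment an AI model paraphrases the payload.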
Implement real-time coaching
Instead of relying strictly on real-time blocking, which can frustrate users and drive them toward unsecured shadow AI, introduce a supportive, human touch with automated user coaching. Right now everyone is in experimentation and learning mode. When a user attempts to paste sensitive data into an unapproved CLI tool, or to connect an MCP server to a confidential data set, treat it as a data security learning opportunity rather than simply blocking it. Real-time pop-up messages explain policy-driven hurdles and nudge users toward safe behaviors and corporate AI governance processes.
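The coach-instead-of-block idea reduces to a small policy decision. Here is a hypothetical sketch (labels, app names, and messages are all invented for illustration) of the three outcomes: allow sanctioned apps outright, coach on risky actions toward unapproved tools, and quietly log everything else.

```python
# Invented example labels and app names, for illustration only.
SENSITIVE_LABELS = {"source_code", "pii", "financials"}
SANCTIONED_APPS = {"corp-approved-llm"}

def decide(app, data_labels):
    """Return a policy verdict: allow, coach, or allow-with-logging."""
    risky = bool(set(data_labels) & SENSITIVE_LABELS)
    if app in SANCTIONED_APPS:
        return {"action": "allow"}
    if risky:
        # Coach rather than hard-block: explain the policy and point
        # the user at the sanctioned alternative.
        return {"action": "coach",
                "message": ("This looks like sensitive data headed to an "
                            "unapproved AI tool. Consider using the "
                            "approved corporate assistant instead.")}
    return {"action": "allow_with_log"}
```

The design choice worth noting is that "coach" is a first-class verdict alongside allow and block, so the friendly pop-up is part of policy, not an afterthought bolted onto a denial.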
AI has long arms and many tentacles, and those tentacles can feel like slippery little suckers (to steal the words of Julia Roberts). It can feel like a constant barrage of internal threats, but in reality you do not have to completely lock down your environment and eschew AI to be secure. By deploying a unified, data-centric security architecture, you can safely embrace AI across web, CLI, and third-party integrations, ensuring that your data remains under your absolute control.
Explore the Netskope One AI Security platform (including NG-SWG, CASB, Agentic Broker, AI Gateway and AI Guardrails) today to learn how you can confidently secure your entire AI ecosystem.