The moment AI stops waiting for a human prompt is the moment traditional security controls stop working.
Most organizations have spent the past two years trying to work out how to secure what they could see: employees querying ChatGPT, developers copy-pasting sensitive code into Copilot, and users oversharing data with public AI tools. These were real risks, and the security industry responded. But a new, more complex challenge has emerged, one that doesn’t involve a human typing a prompt at all: agentic AI.
Agentic AI, which takes autonomous actions such as calling APIs and executing code without human oversight, is the operational future of enterprise AI. However, current enterprise security is ill-equipped to govern these machine-to-machine agentic interactions at scale. Those controls were built for human-initiated traffic, creating a widening security blind spot for CISOs as more AI agents are deployed.
To understand what’s at stake, you need to understand Model Context Protocol (MCP). Introduced by Anthropic in late 2024, MCP is rapidly becoming the standard protocol that connects AI agents to the tools, data sources, and services they need to operate. Think of it as the connective tissue of the agentic AI ecosystem, and it is the reason agentic AI can become a huge vulnerability in data security. (If you’d like to learn more, read our blog about MCP risks.)
Moving beyond visibility alone
To govern agentic AI deployments, the first order of business is knowing what’s actually happening. You need to decode what your agents are doing.
That’s the foundational value of Netskope One Agentic Broker.
The Agentic Broker sits in the traffic path between AI agents and the MCP servers, tools, and data sources they communicate with. It decodes MCP traffic in real time, identifying active agents, remote servers, tool requests, initialization strings, and session responses, turning what was previously invisible non-human traffic into a rich, searchable log of every agentic interaction in your environment.
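To make "decoding MCP traffic" concrete: MCP messages are JSON-RPC 2.0 payloads, so the security-relevant fields, which agent is connecting, which tool it invoked, and what data it passed, can be lifted straight out of each message. The sketch below is purely illustrative of the protocol's shape, not Netskope's implementation; the function and field summaries are our own.

```python
import json

def decode_mcp_message(raw: str) -> dict:
    """Extract security-relevant fields from an MCP JSON-RPC message.

    MCP requests carry a "method" such as "initialize" or "tools/call",
    plus params naming the tool and the arguments the agent sent it.
    """
    msg = json.loads(raw)
    summary = {
        "method": msg.get("method"),  # e.g. "tools/call"
        "id": msg.get("id"),          # correlates request and response
    }
    params = msg.get("params", {})
    if summary["method"] == "tools/call":
        summary["tool"] = params.get("name")            # which tool was invoked
        summary["arguments"] = params.get("arguments")  # what data the agent sent
    elif summary["method"] == "initialize":
        summary["client"] = params.get("clientInfo")    # which agent is connecting
    return summary

# Example: an agent invoking a database-query tool over MCP
raw = json.dumps({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "query_db",
               "arguments": {"sql": "SELECT * FROM customers"}},
})
print(decode_mcp_message(raw))
```

Logging these summaries per session is what turns opaque machine-to-machine traffic into the searchable record described above.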
What does that look like in practice? Security teams can see which AI tools are being invoked by which agents, what data is being requested and returned, whether agents are accessing resources they should not have access to, and whether anything in the traffic pattern looks anomalous—a potential indicator of tool poisoning or unauthorized lateral movement by an agent.
Beyond runtime visibility, organizations can now assess thousands of public MCP servers, much as they use the Netskope Cloud Confidence Index (CCI) to evaluate the risk posture of cloud apps. They can evaluate protocol versions, encryption types, authentication methods, and known risk attributes before approving any public MCP server for use in their agentic workflows. This is supply chain risk management for AI, done at scale.
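A toy illustration of attribute-based risk scoring over exactly those signals, protocol version, encryption, authentication, known issues. The attribute names, weights, and thresholds here are invented for the sketch and bear no relation to the actual CCI methodology.

```python
def score_mcp_server(attrs: dict) -> int:
    """Return a 0-100 score for an MCP server; higher means lower risk.

    Illustrative weights only. MCP protocol revisions are dated strings
    (e.g. "2025-03-26"), so plain string comparison orders them.
    """
    score = 100
    if not attrs.get("tls", False):
        score -= 40   # unencrypted transport
    if attrs.get("auth") not in ("oauth2", "mtls"):
        score -= 30   # weak or missing authentication
    if attrs.get("protocol_version", "") < "2025-03-26":
        score -= 15   # outdated protocol revision
    if attrs.get("known_issues", 0) > 0:
        score -= 15   # published risk attributes
    return max(score, 0)

# A well-configured server keeps its full score; a bare one does not.
good = score_mcp_server({"tls": True, "auth": "oauth2",
                         "protocol_version": "2025-06-18", "known_issues": 0})
bad = score_mcp_server({"tls": False, "auth": "none"})
```

In practice a score like this feeds an approve/deny decision before a server is ever admitted into an agentic workflow.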
When it comes to enforcement, the Netskope One Agentic Broker enables real-time access control policies that can block unauthorized MCP communications, enforce least-privilege principles for AI agents, and apply Netskope’s integrated DLP to prevent sensitive data (passwords, intellectual property, customer records, etc.) from being exfiltrated through tool responses or agent outputs. Every interaction is logged and every policy violation is auditable.
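The exfiltration check amounts to inspecting tool responses and agent outputs for sensitive patterns before they leave the boundary. A minimal sketch of that idea, assuming regex-based detectors; the pattern set and blocking decision are our own illustration, not Netskope's DLP engine.

```python
import re

# Two example detectors: payment-card-like digit runs and AWS access key IDs.
# A real DLP engine uses far richer classifiers than regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def inspect_tool_response(text: str) -> list:
    """Return the names of sensitive-data patterns found in a tool response."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def should_block(text: str) -> bool:
    """Block the response (and log a policy violation) if anything matched."""
    return bool(inspect_tool_response(text))
```

Running every tool response through a check like this before it reaches the agent is what enforces the "nothing sensitive leaves through agent outputs" guarantee, with each hit logged for audit.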
Watch a demo of the Netskope One Agentic Broker here.
What happens if the traffic never hits the cloud?
This is where things get more complex. Many enterprises, particularly those operating in regulated industries or jurisdictions with data sovereignty mandates, are not sending their AI traffic to the cloud. Instead, they’re building private agentic environments within their own infrastructure. They want the power of AI without sensitive data ever leaving their own environment.
While that decision is sound, it also creates a security gap that most teams haven’t fully reckoned with. If agents are running entirely within private infrastructure, calling internally hosted custom LLMs or public LLM inference providers directly, accessing high-value internal databases, or automating business-critical workflows, then a cloud-based security proxy will never see that traffic. And if you can’t see it, you can’t protect it.
Netskope One AI Gateway is purpose-built to close this gap. It extends Netskope’s security capabilities directly into privately hosted AI environments, providing the same visibility and runtime controls that security teams rely on for public AI, but now inside the four walls of your own infrastructure.
The Netskope One AI Gateway deploys as a lightweight virtual appliance directly within your private environment, whether that’s AWS, VMware ESXi, or another hosted infrastructure, making it possible to intercept and govern API traffic between internal applications, autonomous agents, and privately hosted LLMs/public LLM inference providers, without routing that traffic through any external security service.
Once deployed, the Netskope One AI Gateway becomes a single, centralized control plane for everything flowing between your apps and your models, managed through the same Netskope management console as the rest of your deployment.
A unified security posture for the full agentic ecosystem
For nearly a decade, Netskope has been building the infrastructure to understand and secure data moving through complex, cloud-native environments. The same data context, the same policy intelligence, and the same integrated DLP and threat protection that enterprises rely on to secure SaaS applications, cloud workloads, and web traffic now extends natively into the agentic AI layer. The pairing of Agentic Broker and AI Gateway is significant not just for what each product does individually, but also for what they enable together.
Netskope One Agentic Broker secures public-facing agentic interactions, starting with MCP client-server protocol communications—the traffic flowing between AI clients and external or third-party MCP servers. It brings visibility, risk scoring, real-time access controls, and integrated DLP to the open ecosystem of agentic tools and data sources your teams are adopting. Netskope One AI Gateway extends those protections into private AI environments, including the internally hosted models, the proprietary data sources, and the autonomous agent workflows that never touch the public internet but carry your most sensitive operational data.
Together, they deliver what the modern enterprise actually needs: a consistent, unified security posture that stretches from public AI tools to private AI infrastructure, from human-initiated interactions to fully autonomous agent-to-agent workflows. For more details on how to protect your organization against emerging AI risks, read our AI Security Playbook and visit our website to learn more about Netskope One Agentic Broker and Netskope One AI Gateway.