
As we continue asking our many Netskope subject matter experts about the year to come, this week saw us knocking on the door of the Netskope Threat Labs. We asked them how they expect AI to change the threat landscape in 2026. Here’s what they had to say:
Privacy-first AI deployments
Ray Canzanese: “The escalating deployment of sophisticated AI, particularly for tasks involving sensitive or proprietary data, will drive a significant shift away from the software-as-a-service (SaaS) model toward more privacy-protecting and sovereign deployments. Organizations in regulated industries—including finance and healthcare—and those with significant intellectual property will intensify their move to frameworks like Amazon Bedrock to ensure data remains within their own secure perimeters and is never used for model training by the provider. This focus on data sovereignty, IP protection, and compliance with regulations like GDPR and HIPAA will drive a new class of ‘secure-by-design’ AI adoption, where control over the data’s location and usage becomes the primary factor, even if it introduces slightly more complexity than a traditional SaaS offering.”
AI-driven vulnerability discovery
Gianpietro Cutolo: “AI-driven static application security testing (SAST) tools will redefine code security, detecting logic and architectural flaws that traditional scanners overlook. These tools are rapidly becoming indispensable for pen testers and DevSecOps teams, automating code review and vulnerability discovery. At the same time, however, the offensive potential is equally significant, demonstrated by the fact that an AI agent now holds the top rank on HackerOne in the US, signaling a future where both defenders and attackers leverage the same intelligent tooling to outpace each other.”
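To make the class of bugs concrete: a hypothetical sketch of the kind of logic flaw that pattern-matching scanners typically miss but AI-assisted code review can flag. The invoice data, function names, and user names below are illustrative, not drawn from any real codebase.

```python
# Hypothetical example of a broken access control (IDOR) logic flaw.
# There is no injection or dangerous API call for a signature-based
# scanner to match on -- the bug is purely in the missing ownership check.

INVOICES = {
    101: {"owner": "alice", "total": 40},
    102: {"owner": "bob", "total": 99},
}

def get_invoice(current_user: str, invoice_id: int) -> dict:
    # BUG: nothing ties the requested invoice to the authenticated user,
    # so any logged-in user can read any invoice.
    return INVOICES[invoice_id]

def get_invoice_fixed(current_user: str, invoice_id: int) -> dict:
    # FIX: enforce that the caller owns the record before returning it.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != current_user:
        raise PermissionError("not the invoice owner")
    return invoice
```

Because the vulnerable version is syntactically clean and fully parameterized, detecting it requires reasoning about the application's authorization model rather than matching code patterns, which is exactly the gap AI-driven review aims to close.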
The rise of autonomous agentic phishing campaigns
Jan Michael Alcantara: “Social engineering attacks have surged this year as AI has made it easier for attackers to create convincing phishing emails, deepfake videos, and realistic phishing websites. In 2026, we may see autonomous adversarial AI agents capable of running entire phishing campaigns. They could independently research and profile potential targets, conduct reconnaissance, craft personalized lures and payloads, and even deploy and manage command-and-control (C2) infrastructure. This advancement would further lower the technical barriers to launching sophisticated attacks, allowing more threat actors to participate.”
OAuth will be a weak link with AI integrations
Gianpietro Cutolo: “Attackers exploited OAuth and third-party app tokens in the recent Salesforce and Salesloft incidents, and the same threat pattern is now emerging in AI ecosystems. As AI agents and MCP-based systems increasingly integrate with third-party APIs and cloud services, they inherit OAuth’s weakest links: over-permissive scopes, unclear revocation policies, and hidden data-sharing paths. These integrations will become prime targets for supply-chain and data-exfiltration attacks, where compromised connectors or poisoned tools allow adversaries to silently pivot across trusted AI platforms and enterprise environments.”
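One practical defense against the over-permissive scopes described above is a least-privilege audit of the scopes each AI connector's token has actually been granted. A minimal sketch, assuming a per-connector scope allowlist; the connector and scope names are hypothetical, not tied to any specific provider:

```python
# Minimal sketch: flag OAuth scopes granted to AI connectors beyond
# what each connector actually needs. ALLOWED_SCOPES is an illustrative
# least-privilege policy you would define for your own environment.

ALLOWED_SCOPES = {
    "crm-connector": {"contacts.read"},       # read-only CRM lookups
    "ticket-summarizer": {"tickets.read"},    # read-only ticket access
}

def audit_token(connector: str, granted_scopes: set[str]) -> set[str]:
    """Return the scopes granted beyond the connector's allowlist."""
    allowed = ALLOWED_SCOPES.get(connector, set())
    return granted_scopes - allowed

# A token carrying write/delete scopes that a read-only agent never needs:
excess = audit_token("crm-connector",
                     {"contacts.read", "contacts.write", "admin.delete"})
print(sorted(excess))  # → ['admin.delete', 'contacts.write']
```

Any non-empty result is a candidate for scope reduction or token revocation, shrinking the blast radius if that connector is later compromised.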

Read the blog