Over the past year, enterprises have continued to struggle with how employees use generative AI tools. Much like the early days of SaaS and cloud platforms, many workers began experimenting with AI apps on their own, usually by signing in with personal accounts long before IT or security teams had deployed company-approved genAI tools for their workforce. This pattern has given rise to what is now commonly called shadow AI: AI usage that occurs outside organizational visibility, policy, and control.
Even with the rapid push toward enterprise licensing and governance frameworks, unregulated access is still widespread. Internal monitoring across organizations shows that a substantial share of employees rely on tools such as ChatGPT, Google Gemini, and Copilot using credentials not associated with their organization. The good news is that this behavior is shifting in the right direction. Personal account usage has dropped significantly over the past year, with the percentage of AI users relying on personal AI apps falling from 78% to 47%. In parallel, the percentage of people using organization-managed accounts has climbed from 25% to 62%, signaling that more companies are standardizing AI access and maturing their oversight. However, the overlap between these groups is growing: the share of users switching back and forth between personal and enterprise accounts rose from 4% to 9%. This overlap indicates that enterprises still have work to do to provide the convenience and features that users want. The shift toward managed accounts is encouraging, yet it also highlights how quickly employee behavior can outpace governance. Organizations that want to reduce exposure will need clearer policies, better provisioning, and ongoing visibility into how AI tools are actually being used across the workforce.
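As an illustration of how this split can be measured, the sketch below tallies personal, organization-managed, and overlapping genAI account usage from a handful of hypothetical access-log records. The record fields and sample values are assumptions for illustration, not the schema of any particular monitoring product.

```python
from collections import defaultdict

# Hypothetical access-log records: (user, app, account_type), where
# account_type is "personal" or "managed". These fields and values are
# illustrative assumptions only.
events = [
    ("alice", "ChatGPT", "personal"),
    ("alice", "Google Gemini", "managed"),
    ("bob", "Microsoft 365 Copilot", "managed"),
    ("carol", "ChatGPT", "personal"),
]

def account_usage_shares(events):
    """Share of genAI users seen on personal accounts, on managed accounts,
    and on both (the overlap group). The first two categories are not
    mutually exclusive, mirroring how the figures above are reported."""
    accounts_per_user = defaultdict(set)
    for user, _app, account_type in events:
        accounts_per_user[user].add(account_type)

    total = len(accounts_per_user)
    personal = sum("personal" in kinds for kinds in accounts_per_user.values())
    managed = sum("managed" in kinds for kinds in accounts_per_user.values())
    both = sum(len(kinds) == 2 for kinds in accounts_per_user.values())

    return {
        "personal": personal / total,
        "managed": managed / total,
        "both": both / total,
    }

# With this sample, the shares are 2/3 personal, 2/3 managed, and 1/3 both.
print(account_usage_shares(events))
```

Tracking the "both" group separately is what surfaces the convenience gap described above: users who keep a personal account alongside a managed one are the clearest signal that the sanctioned tooling is not yet meeting their needs.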

While the shift from personal accounts to organization-managed AI accounts is an encouraging one, organizations are also grappling with a different challenge: the total number of people using SaaS genAI applications is growing exponentially, tripling over the past year in the average organization. What makes this trend particularly notable is that it is occurring despite increased controls and governance around managed genAI applications. This suggests that employee demand for and reliance on genAI capabilities continue to accelerate faster than organizational guardrails can be implemented.

While the number of users tripled on average, the amount of data being sent to SaaS genAI apps grew sixfold, from 3,000 to 18,000 prompts per month. Meanwhile, the top 25% of organizations are sending more than 70,000 prompts per month, and the top 1% (not pictured) are sending more than 1.4 million prompts per month. In the next section, we explore the risks that accompany this increasing flow of data into SaaS genAI apps.
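As a brief aside on how these volume thresholds are typically derived, the sketch below computes the median and the top-25% cutoff from a list of per-organization prompt counts. The numbers are illustrative placeholders, not data from this report.

```python
import statistics

# Hypothetical monthly prompt counts for a sample of organizations;
# the values are illustrative, not figures from the report.
prompts_per_org = [900, 2300, 5200, 9800, 18000, 26500, 44000, 71000]

# Quartile cut points: the third value is the threshold above which an
# organization falls into the top 25% by prompt volume.
q1, median, q3 = statistics.quantiles(prompts_per_org, n=4)
print(f"median: {median:,.0f} prompts/month")
print(f"top-25% threshold: {q3:,.0f} prompts/month")
```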

Over the past year, several genAI applications have emerged as mainstays across regions and industries. ChatGPT registered the highest adoption at 77%, followed by Google Gemini at 69%. Microsoft 365 Copilot reached 52% adoption, showing strong interest in AI features integrated into everyday workplace environments. Beyond these leading tools, organizations also made extensive use of specialized and embedded AI applications tailored to operational, analytical, and workflow-driven needs.

The chart below shows how the adoption of the top genAI applications has shifted over the past year across regions and industries. ChatGPT maintained consistently high usage, averaging 77% throughout the year. Google Gemini demonstrated strong upward momentum, rising from 46% to 69%, indicating a growing trend of organizations using multiple SaaS genAI services with overlapping functionality. Microsoft 365 Copilot reached 52% adoption, supported by its integration into the Microsoft 365 product ecosystem. Perplexity also experienced steady growth, increasing from 23% to 35%, likely driven by the rising popularity of the Comet browser and its streamlined search-focused AI workflow. Notably, Grok, previously one of the most frequently blocked genAI applications, began to gain traction in April, with usage climbing to 28% as more organizations experimented with its capabilities despite earlier restrictions.

The rapid and decentralized adoption of SaaS genAI tools will fundamentally reshape the cloud security landscape in 2026. We expect two major shifts: continued exponential growth in the use of genAI across business functions, and the dethroning of ChatGPT by the Gemini ecosystem as the most popular SaaS genAI platform. At the current rate, Gemini is poised to surpass ChatGPT in the first half of 2026, reflecting the intense competition and rapid innovation in the space. Organizations will struggle to maintain data governance as sensitive information flows freely into unapproved AI ecosystems, leading to an increase in accidental data exposure and compliance risk. Attackers, meanwhile, will exploit this fragmented environment, leveraging AI to conduct hyperefficient reconnaissance and craft highly customized attacks targeting proprietary models and training data. Balancing AI-driven innovation with security will require a shift toward AI-aware data protection policies and a centralized visibility layer that can monitor and control the use of genAI across all SaaS applications, making the enforcement of fine-grained, context-aware access controls and ethical guardrails a critical security priority for the coming year.
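To make the idea of fine-grained, context-aware controls concrete, here is a minimal sketch of how a policy layer might evaluate individual genAI prompt events. The event fields, the approved-app list, and the decision labels are illustrative assumptions, not a description of any specific product's policy engine.

```python
from dataclasses import dataclass

# Illustrative event model: which app, which kind of account, and whether an
# upstream DLP/classification step flagged the prompt as sensitive.
@dataclass
class PromptEvent:
    app: str                  # e.g. "ChatGPT", "Google Gemini"
    account_type: str         # "managed" or "personal"
    contains_sensitive: bool  # assumed output of an upstream classifier

# Hypothetical allowlist of organization-approved genAI apps.
APPROVED_APPS = {"ChatGPT", "Google Gemini", "Microsoft 365 Copilot"}

def evaluate(event: PromptEvent) -> str:
    """Return a coarse policy decision: 'allow', 'coach', or 'block'.

    The logic mirrors the themes above: steer users toward managed accounts
    on approved apps, and stop sensitive data from flowing into personal or
    unapproved AI ecosystems."""
    if event.app not in APPROVED_APPS:
        return "block" if event.contains_sensitive else "coach"
    if event.account_type == "personal":
        # Approved app, personal credentials: nudge toward the managed
        # account, but block outright if sensitive data is present.
        return "block" if event.contains_sensitive else "coach"
    return "allow"  # managed account on an approved app

print(evaluate(PromptEvent("Grok", "personal", True)))    # -> block
print(evaluate(PromptEvent("ChatGPT", "managed", False))) # -> allow
```

In practice the decision would draw on far richer context (user, data classification, destination instance, tenant), but the structure, a per-prompt evaluation against an app allowlist and account type, is the core of an AI-aware, context-sensitive control.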