SaaS genAI app adoption continues to skyrocket in the enterprise. While the percentage of organizations using SaaS genAI apps has plateaued, with 89% of organizations actively using at least one SaaS genAI app, adoption growth continues to manifest itself in multiple other ways. First, the number of people using SaaS genAI apps within each organization increased by more than 50%: an average of 7.6% of people in each organization were using SaaS genAI apps in May, compared to 5% in February (as shown in the figure below). The figure also shows the first and third quartiles, with the third quartile highlighting that 25% of organizations have more than one-quarter (25.6%) of their user population actively using SaaS genAI apps. At the 90th percentile (not pictured), at least 47% of the user population is using SaaS genAI apps.

Second, the number of genAI apps in use continues to grow, reaching an average of 7 per organization, up from 5.6 in February. We saw similar growth in the third quartile, where organizations are now using 15.4 apps, up from 13.3 in February. We predicted this growth in our February report: aggressive investment in AI startups translates into the release of many new SaaS AI apps, creating even more shadow AI use that needs to be discovered and secured. Today, Netskope is tracking more than 1,550 distinct generative AI SaaS applications, up from just 317 in February, indicating the rapid pace at which new apps are being released and adopted in enterprise environments.

The third way in which the growth of SaaS genAI apps manifests itself in the enterprise is the amount of data flowing into these apps. For the average organization, the amount of data uploaded each month has increased 6.5% over the past three months, from 7.7 GB to 8.2 GB. At the 75th percentile, the increase was even more pronounced, from 20 GB to 22.8 GB (a 14% increase). At the 90th percentile, the pattern continues, with a 15% increase from 46 GB to 53 GB. At the current rate of roughly 15% growth per quarter, we expect the 90th percentile to exceed 100 GB in Q3 2026. Even in organizations that are already seeing a significant amount of data being uploaded to SaaS genAI apps, the rapid growth continues with no signs of slowing down. As covered in our previous Generative AI Report, the data users are uploading to genAI apps includes intellectual property, regulated data, source code, and secrets, underscoring the importance of identifying shadow SaaS genAI use and implementing controls to prevent unwanted data leaks.
Another noteworthy change that has occurred over the past four months is a decrease in the number of organizations using ChatGPT. Since its introduction in November 2022, the percentage of organizations using ChatGPT had never decreased until now. In February, we reported that nearly 80% of organizations were using ChatGPT; that figure has now fallen modestly to 78%. This decrease comes as Gemini and Copilot (Microsoft Copilot, Microsoft 365 Copilot, GitHub Copilot) continue to gain traction, thanks to their seamless integration into the Google and Microsoft product ecosystems that are already ubiquitous in the enterprise. ChatGPT was the only one of the top 10 apps to see a decrease since February, as shown in the figure below. Other top 10 apps, including Anthropic Claude, Perplexity AI, Grammarly, and Gamma, all saw enterprise adoption gains.

Another noteworthy change since our last report is that Grok is rapidly gaining popularity, entering the top 10 for the first time in May. Interestingly, Grok is now simultaneously in the top 10 most-used apps (pictured above) and in the top 10 most-blocked apps (pictured below). Compared to February, fewer organizations are blocking Grok outright and are instead allowing it for specific (usually personal) use cases. The number of organizations blocking Grok peaked in April and is trending downward as the number of Grok users continues to rise. This comes as organizations opt for more granular controls, using DLP and real-time user coaching to prevent sensitive data from being sent to Grok. That said, blocks still outnumber allows, with 25% of organizations blocking all attempts to use Grok in May, while only 8.5% of organizations are seeing some Grok use. This is not an unusual pattern, as organizations tend to block new apps initially while they perform security reviews and implement controls to restrict their use. On the other hand, some SaaS genAI apps, such as DeepSeek, remain heavily blocked and therefore do not see any significant enterprise use.

Shadow AI is a relatively new term that describes the use of AI solutions without the knowledge or approval of centralized IT and cybersecurity departments. In the early days, nearly 100% of SaaS genAI use was shadow AI. Over time, organizations began to review and approve specific enterprise solutions (typically ChatGPT, Gemini, or Copilot), and users transitioned to those approved solutions. Real-time coaching policies, which remind users who open an unapproved app to switch to an approved alternative, have been instrumental in this transition (a simplified sketch of this logic follows below). Those controls continue to be effective, with only 60% of the enterprise population using personal SaaS genAI apps in May, a 12 percentage point decrease since February. We expect this trend to continue in the coming months, with the rate dipping below 40% by the end of the year. At the same time, new shadow AI challenges (genAI platforms, on-premises genAI, and AI agents) have emerged, which we will explore in more detail in the following sections.

For readers interested in discovering the extent of SaaS shadow AI use in their environments, Netskope is tracking more than 1,550 distinct generative AI SaaS applications. Netskope customers can identify shadow AI by searching for any app activity categorized as “Generative AI” and focusing on unapproved apps and personal logins. While blocking unapproved apps can be an effective strategy, user coaching policies are a more nuanced way to guide users away from unapproved solutions and toward those managed and approved by the company. Such policies are often coupled with DLP policies to mitigate risks of sensitive data leaking to unapproved SaaS genAI apps, as detailed in our previous Cloud and Threat Report.