                                      GenAI adoption is surging, with SaaS AI apps, genAI platforms, on-premises AI infrastructure, and now custom AI agents becoming enterprise mainstays. The newest trends, including on-premises AI infrastructure, genAI platforms, and custom agents, introduce additional data and security risks to the organization while also largely remaining under the radar of security teams. This report will help you shed light on the shadow AI infrastructure in your organization, enabling you to start mitigating those risks.


Introduction

                                      This is our fourth Netskope Cloud and Threat Report dedicated to the emerging field of generative AI. Our first report in 2023 highlighted the exponential growth in popularity of ChatGPT within the enterprise. Our second report in 2024 highlighted that nearly all enterprises were using SaaS genAI apps and implementing policies to protect sensitive data. Our most recent report continued that narrative while introducing concepts of shadow AI, indirect genAI use, and local AI. This newest installment examines emerging trends in shadow AI and agentic AI within the enterprise, where some users are shifting toward genAI platforms and on-premises solutions as they build custom apps and autonomous agents, creating a new set of cybersecurity challenges.

                                      This report focuses on shadow AI and agentic AI, exploring how organizations can shed light on the shadows along the way. We begin by examining SaaS genAI apps, where we are still seeing a considerable amount of shadow AI. There is a clear centralization trend emerging, with organizations gravitating toward a few key enterprise-managed ecosystems, especially Gemini and Copilot. However, the majority of users (60%) are still using personal, unmanaged apps, representing a significant amount of shadow AI that continues to span into new apps as they are released.

We continue with genAI platforms, such as Azure OpenAI, Amazon Bedrock, and Google Vertex AI, which are rapidly gaining popularity due to their simplicity, flexibility, customizability, and scalability. They empower users to build custom applications or agents using the models of their choosing. And perhaps most importantly, they provide some security and privacy guarantees that differentiate them from many SaaS apps and on-prem solutions. At this stage, adoption of genAI platforms is led by individuals experimenting with the relatively new technology, making genAI platforms the fastest-growing category of shadow AI.

                                      Running AI infrastructure on-premises is also a growing trend, where users are installing tools like Ollama to provide an interface to a wide variety of models or using frameworks like LangChain to build custom agents, creating new on-premises shadow AI infrastructure. On-premises AI is growing at a slower pace than genAI platforms, as the latter are more accessible and offer better off-the-shelf security, as we will explore later in this report.

                                      We conclude the report by examining the area of AI agents, which are systems that use AI to achieve a specific goal with minimal human intervention. This is an area where we are now starting to see significant activity in the enterprise, with users exploring a variety of agent frameworks, both on-premises and in the cloud, and building up even more shadow AI that is tightly intertwined with enterprise data and critical workflows.

                                       

Highlights

                                      • Shadow AI represents the majority of AI use in the enterprise, driven by individual adoption of SaaS AI apps, AI platforms, on-premises AI deployments, and now custom AI agents.
                                      • SaaS AI use continues to grow rapidly within the average organization, with 50% more people interacting with AI apps and 6.5% more data being uploaded to SaaS AI apps over the past three months, averaging 8.2 GB per organization per month.
                                      • ChatGPT experienced its first decline in enterprise popularity since we began tracking it in 2023, as SaaS AI use consolidates around purpose-built solutions, such as Gemini and Copilot, that are well-integrated with existing enterprise workflows.
                                      • AI platforms (e.g., Amazon Bedrock, Azure OpenAI, and Google Vertex AI) are rapidly gaining popularity, enabling users to create custom apps and agents that interact directly with enterprise data stores, presenting new shadow AI challenges for enterprises to address.
                                      • AI agents are being developed, tested, and deployed on-premises using agent frameworks like LangChain, creating a particularly challenging new type of shadow AI, as on-premises deployments are typically the most difficult to discover and secure.

                                       

Definitions

                                      • Agent frameworks are software libraries and tools that simplify the creation of autonomous AI agents by providing pre-built components for planning, memory, and tool integration (e.g., LangChain, OpenAI Agent Framework).
                                      • Agentic AI refers to systems where an agent can autonomously plan and execute a series of actions to accomplish a high-level goal, completing complex tasks without step-by-step human guidance.
                                      • GenAI platforms are managed cloud services that provide the foundational models, tools, and infrastructure to build, customize, train, and deploy AI models, applications, and agents (e.g., Amazon Bedrock, Azure OpenAI, Google Vertex AI).
                                      • LLM interfaces are front-end applications that enable users to interact with LLMs and are typically used on-premises (e.g., Ollama, LM Studio).
• SaaS genAI apps are purpose-built, cloud-hosted applications that use genAI as a primary feature to create new content or summarize existing content (e.g., ChatGPT, Gemini, Copilot).

                                       

Continuing SaaS genAI app growth

SaaS genAI app adoption continues to skyrocket in the enterprise. While the percentage of organizations using SaaS genAI apps has plateaued, with 89% of organizations actively using at least one SaaS genAI app, growth continues to manifest in multiple other ways. First, the number of people using SaaS genAI apps within each organization increased by more than 50%: an average of 7.6% of people in each organization used SaaS genAI apps in May, up from 5% in February (as shown in the figure below). The figure also shows the first and third quartiles, with the third quartile highlighting that 25% of organizations have more than one-quarter (25.6%) of their user population actively using SaaS genAI apps. At the 90th percentile (not pictured), at least 47% of the user population in those organizations is using SaaS genAI apps.

                                      Diagram showing GenAI users per month median percentage with shaded area showing 1st and 3rd quartiles

Second, the number of genAI apps in use continues to grow, reaching an average of 7 per organization, up from 5.6 in February. We saw similar growth in the third quartile, where organizations are now using 15.4 apps, up from 13.3 in February. We predicted this growth in our February report: aggressive investment in AI startups translates into the release of many new SaaS AI apps, creating even more shadow AI use that needs to be discovered and secured. Today, Netskope is tracking more than 1,550 distinct generative AI SaaS applications, up from just 317 in February, indicating the rapid pace at which new apps are being released and adopted in enterprise environments.

                                      Diagram showing GenAI apps per organization median with shaded area showing 1st and 3rd quartiles

                                      The third way in which the growth of SaaS genAI apps manifests itself in the enterprise is in the amount of data flowing into these apps. For the average organization, the amount of data uploaded each month has increased 6.5% from 7.7 GB to 8.2 GB over the past three months. At the 75th percentile, this increase was even more significant, from 20 GB to 22.8 GB (a 14% increase). At the 90th percentile, the pattern continues, with a 15% increase from 46 GB to 53 GB. At the current rate, we expect the 90th percentile to exceed 100 GB in Q3 2026. Even in organizations that are already seeing a significant amount of data being uploaded to SaaS genAI apps, the rapid growth continues with no signs of slowing down. As covered in our previous Generative AI Report, the data users are uploading to genAI apps includes intellectual property, regulated data, source code, and secrets, underscoring the importance of identifying shadow SaaS genAI use and implementing controls to prevent unwanted data leaks.

                                      Another noteworthy change that has occurred over the past four months is a decrease in the number of organizations using ChatGPT. Since its introduction in November 2022, the percentage of organizations using ChatGPT has never decreased. In February, we reported that nearly 80% of organizations were using ChatGPT, which has now fallen modestly to 78%. This decrease comes as Gemini and Copilot (Microsoft Copilot, Microsoft 365 Copilot, GitHub Copilot) continue to gain traction, thanks to their seamless integration into the Google and Microsoft product ecosystems that are already ubiquitous in the enterprise. ChatGPT was the only one of the top 10 apps to see a decrease since February, as shown in the figure below. Other top 10 apps, including Anthropic Claude, Perplexity AI, Grammarly, and Gamma, all saw enterprise adoption gains.

                                      Diagram showing the most popular apps by percentage of organizations

Another noteworthy change since our last report is that Grok is rapidly gaining popularity, entering the top 10 for the first time in May. Interestingly, Grok is now simultaneously in the top 10 most-used apps (pictured above) and the top 10 most-blocked apps (pictured below). Compared to February, fewer organizations are blocking Grok and are instead allowing it for specific (usually personal) use cases. The number of organizations blocking Grok peaked in April and is trending downward as the number of Grok users continues to rise. This comes as organizations are opting for more granular controls, using DLP and real-time user coaching to prevent any sensitive data from being sent to Grok. That said, the blocks still outnumber the allows, with 25% of organizations blocking all attempts to use Grok in May, while only 8.5% of organizations are seeing some Grok use. This is not an unusual trend, as organizations tend to block new apps initially while they perform security reviews and implement controls to restrict their use. On the other hand, there are some SaaS genAI apps, such as DeepSeek, that remain heavily blocked and therefore do not see any significant enterprise use.

                                      Diagram of most blocked AI apps by percentage of organizations enacting a blanket ban on the app

Shadow AI is a relatively new term that describes the use of AI solutions without the knowledge or approval of centralized IT and cybersecurity departments. In the early days, nearly 100% of SaaS genAI use was shadow AI. Over time, organizations began to review and approve specific enterprise solutions (typically ChatGPT, Gemini, or Copilot), and users transitioned to those approved solutions. Real-time coaching policies, which prompt users of unapproved apps to switch to an approved alternative, have been instrumental in this transition. Those controls continue to be effective, with 60% of the enterprise population using personal SaaS genAI apps in May, down 12 percentage points since February. We expect this trend to continue in the coming months, with the rate dipping below 40% by the end of the year. At the same time, new shadow AI challenges (genAI platforms, on-premises genAI, and AI agents) have emerged, which we will explore in more detail in the following sections.

                                      Diagram showing GenAI usage personal vs. organization account breakdown

                                      For readers interested in discovering the extent of SaaS shadow AI use in their environments, Netskope is tracking more than 1,550 distinct generative AI SaaS applications. Netskope customers can identify shadow AI by searching for any app activity categorized as “Generative AI” and focusing on unapproved apps and personal logins. While blocking unapproved apps can be an effective strategy, user coaching policies are a more nuanced way to guide users away from unapproved solutions and toward those managed and approved by the company. Such policies are often coupled with DLP policies to mitigate risks of sensitive data leaking to unapproved SaaS genAI apps, as detailed in our previous Cloud and Threat Report.
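To make that concrete, here is a minimal Python sketch of what such a review might look like against an exported transaction log. The column names (user, app, app_category, instance_type) and the approved-app list are illustrative placeholders, not Netskope's actual export schema; adapt them to whatever fields your own log export contains.

import csv
from collections import Counter

# Illustrative approved list; replace with the apps your organization has sanctioned.
APPROVED_APPS = {"ChatGPT Enterprise", "Gemini", "Microsoft Copilot"}

def summarize_shadow_genai(log_path: str) -> Counter:
    """Tally events for unapproved genAI apps and personal logins in a log export.

    Assumes a CSV with hypothetical columns: user, app, app_category, instance_type.
    """
    shadow = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("app_category") != "Generative AI":
                continue
            app = row.get("app", "unknown")
            personal = row.get("instance_type", "").lower() == "personal"
            if app not in APPROVED_APPS or personal:
                reason = "personal login" if personal else "unapproved app"
                shadow[(app, reason)] += 1
    return shadow

if __name__ == "__main__":
    for (app, reason), count in summarize_shadow_genai("transactions.csv").most_common(10):
        print(f"{app:30s} {reason:15s} {count}")

A summary like this is only a starting point for discovery; the policy work of coaching users toward approved apps and applying DLP still happens in your security platform.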

                                       

Increasing adoption of genAI platforms

                                      While SaaS genAI apps gained popularity for their ease of use, genAI platforms are now gaining popularity due to their flexibility and privacy benefits. Their flexibility stems from the fact that you can quickly and easily deploy the models of your choice using a single interface. Their privacy benefits arise from the fact that you host the models yourself and do not share any of your data with a third party. At the same time, the shared responsibility model in a genAI platform shifts more security responsibility to the user compared to a SaaS solution.

The use of genAI platforms is skyrocketing. We have seen a 50% increase in the number of users and a 73% increase in network traffic on these platforms over the past three months. As of May, 41% of organizations are using at least one genAI platform, while 14% are using at least two, and 2.7% are using at least three. One of the reasons behind the proliferation of multiple solutions is that the adoption of genAI platforms is another shadow AI problem; individuals are choosing whichever platforms they are most familiar with or that seem best suited to their specific use cases. Using a genAI platform is also nearly as accessible as using a SaaS genAI app, since the major cloud service providers all have their own offerings. The most popular genAI platform is Microsoft Azure OpenAI, used by 29% of organizations. Amazon Bedrock follows closely behind at 22%, with Google Vertex AI in a distant third place at 7.2%. All three platforms are gaining popularity as more users become familiar with them and explore the opportunities they offer.

                                      Diagram showing cloud AI framework adoption by percentage of organizations

                                      The rapid growth of shadow AI places the onus on the organization to identify who is creating cloud genAI infrastructure using genAI platforms, where they are building it, and whether they are following best practices for AI security. One of the powerful features of these genAI platforms is that they enable direct connection of enterprise data stores to AI applications, necessitating additional reviews and monitoring to ensure that these applications do not compromise enterprise data security. Netskope customers can gain insights into who is using these tools and how they are using them by reviewing their logs for any of these genAI platforms by name. Knowing who is using them and how they are using them is the first step to ensuring their secure use.
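As a starting point for that kind of log review, the Python sketch below maps destination hostnames to the three platforms named above. The endpoint patterns reflect the providers' public endpoint naming conventions as we understand them (Azure OpenAI resources under openai.azure.com, Bedrock service endpoints under amazonaws.com, Vertex AI under aiplatform.googleapis.com); treat them as assumptions to validate against your own traffic rather than an authoritative list.

import re

# Assumed endpoint patterns for the three platforms named in this report.
PLATFORM_PATTERNS = {
    "Azure OpenAI":     re.compile(r"\.openai\.azure\.com$"),
    "Amazon Bedrock":   re.compile(r"^bedrock(-[a-z-]+)?\.[a-z0-9-]+\.amazonaws\.com$"),
    "Google Vertex AI": re.compile(r"(^|-)aiplatform\.googleapis\.com$"),
}

def classify_platform(hostname: str) -> str | None:
    """Return the genAI platform a destination hostname belongs to, if any."""
    for platform, pattern in PLATFORM_PATTERNS.items():
        if pattern.search(hostname):
            return platform
    return None

# Example: tag (user, hostname) log entries with the platform they touched.
entries = [("alice", "myresource.openai.azure.com"),
           ("bob", "bedrock-runtime.us-east-1.amazonaws.com")]
for user, host in entries:
    platform = classify_platform(host)
    if platform:
        print(f"{user} -> {platform} ({host})")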

                                       

Increasing adoption of on-premises genAI

Another practice gaining popularity is the use of genAI on-premises. On-premises genAI use takes multiple forms, ranging from using on-premises GPU resources to train or host models to developing on-premises tools that interact with SaaS genAI applications or genAI platforms. Using genAI locally is a good way for organizations to leverage their existing GPU investments or to build tools that interact with on-premises systems and datasets. However, on-premises deployment also means the organization is solely responsible for the security of its genAI infrastructure, so understanding and applying frameworks such as the OWASP Top 10 for Large Language Model Applications or MITRE ATLAS is now essential.

One of the most popular ways to use genAI locally is to deploy an LLM interface. Similar to genAI platforms, LLM interfaces enable interaction with various models through a single interface. LLM interfaces are not as widely used as genAI platforms, with only 34% of organizations using LLM interfaces compared to 41% using genAI platforms. Ollama is the most popular LLM interface by a large margin, and it also illustrates one of the key security differences between working on-premises and working in a genAI platform: Ollama does not include any built-in authentication. If you are using Ollama, you must deploy it behind a reverse proxy or private access solution with appropriate authentication mechanisms in place to protect against unauthorized use. Furthermore, whereas genAI platforms typically provide built-in guardrails to protect against abuse of the models themselves, those using tools like Ollama must take extra steps to prevent abuse and misuse. Compared to Ollama, other LLM interfaces, including LM Studio and Ramalama, have a much smaller enterprise user base.

                                      Diagram showing the top LLM interfaces by percentage of organizations
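As one illustration of the exposure described above, the following Python sketch probes a list of candidate hosts for an Ollama instance answering without authentication; Ollama's REST API listens on port 11434 by default, and GET /api/tags lists locally available models. The host inventory shown is hypothetical, and scanning of this kind should be limited to systems you are authorized to assess.

import requests

# Hosts suspected of running Ollama (hypothetical inventory -- replace with your own).
CANDIDATE_HOSTS = ["10.0.0.12", "10.0.0.37"]

def probe_ollama(host: str, port: int = 11434, timeout: float = 3.0) -> list[str]:
    """Return the models an Ollama instance exposes if it answers without authentication.

    A successful unauthenticated response means anyone who can reach the port
    can use the instance.
    """
    resp = requests.get(f"http://{host}:{port}/api/tags", timeout=timeout)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

for host in CANDIDATE_HOSTS:
    try:
        models = probe_ollama(host)
        print(f"[!] {host} answers unauthenticated; models: {', '.join(models) or 'none'}")
    except requests.RequestException:
        print(f"[ok] {host}: no open Ollama endpoint found")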

Due to the heightened security concerns surrounding LLM interfaces, organizations should proactively work to identify who is using them and where they are being used. Netskope customers can identify the use of popular LLM interfaces by their User-Agent strings in the transaction logs. For example, the top three interfaces’ User-Agent strings begin with ollama, LM Studio, and llama-cpp.
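A simple way to operationalize that search, sketched below in Python, is to match those User-Agent prefixes against an exported log; the user and user_agent field names are placeholders to adjust to your own log schema.

import csv
from collections import defaultdict

# User-Agent prefixes for the top three LLM interfaces named in this report.
UA_PREFIXES = {
    "ollama": "Ollama",
    "LM Studio": "LM Studio",
    "llama-cpp": "llama.cpp",
}

def find_llm_interface_users(log_path: str) -> dict[str, set[str]]:
    """Map each detected LLM interface to the set of users whose traffic carried it."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            ua = row.get("user_agent", "")
            for prefix, interface in UA_PREFIXES.items():
                if ua.startswith(prefix):
                    hits[interface].add(row.get("user", "unknown"))
    return hits

# Example:
# for interface, users in find_llm_interface_users("transactions.csv").items():
#     print(interface, "->", ", ".join(sorted(users)))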

                                      Another way to discover who is experimenting with AI tools is to monitor access to AI marketplaces, such as Hugging Face. Hugging Face is a very popular community for sharing AI tools, AI models, and datasets. Users are downloading resources from Hugging Face in 67% of organizations. While the population of users doing so is small (only 0.3% on average), identifying these users is crucial for uncovering shadow AI, as they may be deploying AI infrastructure on-premises or in the cloud. Furthermore, because Hugging Face is a community platform designed to facilitate sharing, organizations must be aware of the supply chain risks associated with resources downloaded from this platform. In addition to the obvious risks of malicious code embedded in tooling, certain high-risk file formats (such as Python Pickles, which are vulnerable to dangerous arbitrary code execution attacks) are commonly shared on Hugging Face. Netskope customers can identify Hugging Face users by searching for the “Hugging Face” app in their logs. They should verify that resources downloaded from Hugging Face are covered by their Threat Protection policies.
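To illustrate why pickle-based artifacts deserve extra scrutiny, the sketch below uses Python's standard pickletools module to flag opcodes (such as GLOBAL, STACK_GLOBAL, and REDUCE) that let a pickle import and invoke arbitrary callables the moment it is loaded. It is a rough triage aid under the assumption that downloaded artifacts land in a local directory (the path shown is hypothetical), not a substitute for dedicated model-scanning or Threat Protection controls.

import pickletools
from pathlib import Path

# Opcodes that let a pickle import and invoke arbitrary callables during loading.
# Their presence is not proof of malice, but it means the file can execute code
# the moment it is unpickled.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def flag_risky_pickle(path: Path) -> set[str]:
    """Return the set of code-executing opcodes found in a pickle file."""
    found = set()
    with path.open("rb") as f:
        for opcode, _arg, _pos in pickletools.genops(f.read()):
            if opcode.name in SUSPICIOUS_OPCODES:
                found.add(opcode.name)
    return found

# Example: scan a directory of downloaded model artifacts (hypothetical path).
for p in Path("downloads/huggingface").rglob("*"):
    if p.suffix in {".pkl", ".pickle", ".bin", ".pt"}:
        try:
            risky = flag_risky_pickle(p)
        except Exception:
            continue  # not a raw pickle stream, or truncated
        if risky:
            print(f"[!] {p}: contains {', '.join(sorted(risky))}")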

                                       

AI Agents

                                      While SaaS genAI app use is consolidating around purpose-built apps that are well-integrated with existing enterprise workflows (i.e., Gemini and Copilot), another trend is emerging, spanning SaaS AI apps, genAI platforms, and on-premises AI use: AI agents. An AI agent is a system tasked with a specific goal and given some limited autonomy to achieve that goal without requiring user interaction. Thanks primarily to advancements in foundation models, we are now starting to see a critical mass of users across multiple organizations building AI agents and using agentic features of SaaS solutions. For example, GitHub Copilot (used in 39% of organizations) offers an agent mode, where you can provide a coding task, and it will iteratively modify your code and test it until it believes it has achieved the goal (or reached another stopping criterion). There are two noteworthy characteristics of an AI agent:

                                      1. It has access to some data that belongs to the organization.
                                      2. It can execute some actions autonomously.

                                      In the case of GitHub Copilot, it has access to your source code and can execute the necessary commands to compile and run that code within your infrastructure. These two characteristics of AI agents underscore the importance of ensuring that their use is adequately secured to protect sensitive data and infrastructure.

                                      One option for creating AI agents is to use one of the many available agent frameworks. In total, 5.5% of organizations have users running agents created using popular AI agent frameworks on-premises. Among those frameworks, LangChain is the most popular by a large margin (having been around since October 2022), while the OpenAI Agent Framework is rapidly gaining popularity (having just been released in March 2025). While there are many other frameworks available, none of them has yet seen significant enterprise adoption. Pydantic-ai, in third place, is used in 3 out of every 1000 organizations. On-premises AI agents pose a significant shadow AI risk because they are highly accessible (easy to build and run), often have access to sensitive data, and can execute code autonomously. Organizations should proactively work to identify users who create and use AI agents on-premises.

                                      Diagram showing the top agent frameworks by percentage of organizations

While the agent runs on-premises, the actual AI models that underpin the agent can run anywhere, including in SaaS apps, genAI platforms, or on-premises environments. When on-premises agents access SaaS services, they typically hit different API endpoints than the browser does. For example, conversations with OpenAI’s ChatGPT in the browser go to chatgpt.com, while programmatic interaction with OpenAI’s models goes to api.openai.com. The figure below shows the top SaaS AI app API domains by the percentage of organizations using them; 66% of organizations, for instance, have users making API calls to api.openai.com, indicating some non-browser interaction with OpenAI services. This interaction could come from third-party tools, custom tools, or AI agents. OpenAI has a significant lead over other SaaS services in this regard, and given the rapid growth in popularity of the OpenAI Agent Framework, we anticipate this trend will intensify in the coming months.

                                      Diagram showing the top 10 SaaS AI API domains by percentage of organizations

                                      Netskope customers can identify who is using the popular agent frameworks by searching their logs for the relevant User-Agent strings. For example, the top three frameworks have User-Agent strings beginning with langchain, Agents/Python, and pydantic-ai. They can find non-browser interaction with SaaS AI applications by searching their transaction logs for the relevant domains. The User-Agent strings and the process names will provide clues as to the nature of the interaction. More broadly, Netskope customers should look for any GenAI interactions coming from outside the web browser.
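The Python sketch below combines both detections from this paragraph into a single record classifier: it matches the agent-framework User-Agent prefixes named above and flags non-browser traffic to SaaS AI API domains such as api.openai.com. The field names (user_agent, hostname), the browser heuristic, and any domains beyond api.openai.com are assumptions to tune for your environment.

AGENT_UA_PREFIXES = {
    "langchain": "LangChain",
    "Agents/Python": "OpenAI Agent Framework",
    "pydantic-ai": "Pydantic AI",
}
# Extend with the SaaS AI API domains you track.
SAAS_AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}

def classify_record(record: dict) -> list[str]:
    """Label a transaction-log record with any agent or AI-API indicators it matches.

    Expects 'user_agent' and 'hostname' keys (hypothetical field names).
    """
    labels = []
    ua = record.get("user_agent", "")
    for prefix, framework in AGENT_UA_PREFIXES.items():
        if ua.startswith(prefix):
            labels.append(f"agent-framework:{framework}")
    # Crude browser heuristic: browser User-Agents typically begin with "Mozilla/".
    if record.get("hostname") in SAAS_AI_API_DOMAINS and not ua.startswith("Mozilla/"):
        labels.append("non-browser-saas-ai-api")
    return labels

print(classify_record({"user_agent": "langchain/0.2.0", "hostname": "api.openai.com"}))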

                                      Another option for building AI agents is to use one of the genAI platforms. For this discussion, we will focus specifically on Amazon Bedrock, as it provides distinct service endpoints for managing models, managing agents, making inference requests to models, and invoking agents. Earlier in this report, we reported that 22% of organizations are using Amazon Bedrock. When we break this number down further, we find that all 22% are using it for managing models and running inference against models, while 14% are using it to develop, deploy, or invoke agents. We also stated earlier in this section that 5.5% of organizations are running AI agents on-premises, which means that 2.5x more organizations are using Amazon Bedrock for AI agents compared to on-premises agents.

                                      Why are frameworks like Bedrock more popular for agentic AI than on-premises frameworks?

                                      • They streamline development, especially for those already invested in the AWS ecosystem.
                                      • They reduce operational overhead by being a managed service.
                                      • They come with the built-in scalability of a mature IaaS platform.

                                      In other words, a stable, managed environment with built-in security and support is more appealing to organizations than having to handle all these aspects independently. Based on current trends, we expect this divide to widen, with an even greater number of organizations favoring genAI platforms over on-premises solutions in the coming months. On-premises solutions will still make sense to some organizations, especially those with specific high-volume, predictable, continuous, and long-term use cases where on-premises might be more cost-effective. And, of course, anyone within the organization may start developing and using agents on premises during development or for personal use, underscoring the need for organizations to monitor continuously for shadow AI.

                                      Netskope users looking to discover who is using Amazon Bedrock to deploy or invoke agents should search their transaction logs for the Bedrock build-time domains (bedrock-agent.*.amazonaws.com) and the Bedrock run-time domains (bedrock-agent-runtime.*.amazonaws.com).
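Expressed as patterns, those two endpoint families can be matched like this (a minimal Python sketch; the region portion of the hostname is matched generically):

import re

# Domain patterns from this section: Bedrock agent build-time and run-time endpoints.
BEDROCK_AGENT_BUILD = re.compile(r"^bedrock-agent\.[a-z0-9-]+\.amazonaws\.com$")
BEDROCK_AGENT_RUNTIME = re.compile(r"^bedrock-agent-runtime\.[a-z0-9-]+\.amazonaws\.com$")

def bedrock_agent_activity(hostname: str) -> str | None:
    """Classify a destination hostname as Bedrock agent build-time, run-time, or neither."""
    if BEDROCK_AGENT_RUNTIME.match(hostname):
        return "run-time (agent invocation)"
    if BEDROCK_AGENT_BUILD.match(hostname):
        return "build-time (agent development/deployment)"
    return None

print(bedrock_agent_activity("bedrock-agent-runtime.us-east-1.amazonaws.com"))
# -> run-time (agent invocation)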

A CISO perspective

                                      To effectively manage the data security risks posed by the increasing use of generative AI technologies, CISOs and security leaders should implement the following actionable guidance:

                                      1. Assess your genAI landscape:
                                        • Identify usage: Determine which SaaS genAI applications, genAI platforms, and locally hosted genAI tools are in use across your organization.
                                        • User identification: Pinpoint who is using these tools and how they are being leveraged within different departments and workflows.
                                      2. Bolster genAI app controls:
                                        • Approved apps: Establish and enforce a policy that only allows the use of company-approved genAI applications.
                                        • Block unapproved apps: Implement robust blocking mechanisms to prevent the use of unapproved genAI apps.
                                        • DLP for sensitive data: Use data loss prevention (DLP) policies to prevent sensitive data from being shared with unauthorized genAI applications.
                                        • Real-time user coaching: Deploy real-time user coaching to guide users towards approved solutions and educate them on secure genAI practices. Regularly benchmark these controls against industry best practices.
3. Inventory local genAI infrastructure:
  • Discover deployments: Identify on-premises LLM interfaces (e.g., Ollama, LM Studio) and locally run agent frameworks (e.g., LangChain), for example via their User-Agent strings, and monitor downloads from AI marketplaces such as Hugging Face.
  • Secure what you find: Apply frameworks such as the OWASP Top 10 for Large Language Model Applications and MITRE ATLAS, and ensure locally hosted tools sit behind appropriate authentication and access controls.
                                      4. Continuous monitoring and awareness:
                                        • Monitor genAI use: Implement continuous monitoring of genAI use within your organization to detect new shadow AI instances, whether through SaaS apps, genAI platforms, or on-premises deployments.
                                        • Stay informed: Stay updated on new developments in AI ethics, regulatory changes, and adversarial attacks to adjust your security posture proactively.
5. Understand and communicate the risk of agentic shadow AI:
  • Unverified employee: Agentic shadow AI is like a new hire who comes into your office every day, handling data and taking actions on systems, all without a background check or any security monitoring in place.
                                        • Set guidance: Establish guidelines for the organization and its members. Use the detections outlined in this report to identify those who are leading the charge in the adoption of agentic AI and partner with them to develop an actionable and realistic policy, standard, or guideline.

                                      By taking these proactive steps, organizations can effectively manage the evolving challenges presented by genAI technologies and ensure their safe and responsible adoption.

                                       

Netskope Threat Labs

Staffed by the industry’s foremost cloud threat and malware researchers, Netskope Threat Labs discovers, analyzes, and designs defenses against the latest cloud threats affecting enterprises. Our researchers are regular presenters and volunteers at top security conferences, including DEF CON, Black Hat, and RSA.

                                       

About This Report

                                      Netskope provides threat protection to millions of users worldwide. Information presented in this report is based on anonymized usage data collected by the Netskope One platform relating to a subset of Netskope customers with prior authorization.

This report contains information about detections raised by the Netskope One Next Generation Secure Web Gateway (NG-SWG); it does not consider the significance of the impact of each individual threat. The statistics in this report are based on the period from February 1, 2025, through May 31, 2025. Stats reflect attacker tactics, user behavior, and organization policy.