close
close
Your Network of Tomorrow
Your Network of Tomorrow
Plan your path toward a faster, more secure, and more resilient network designed for the applications and users that you support.
          Experience Netskope
          Get Hands-on With the Netskope Platform
          Here's your chance to experience the Netskope One single-cloud platform first-hand. Sign up for self-paced, hands-on labs, join us for monthly live product demos, take a free test drive of Netskope Private Access, or join us for a live, instructor-led workshops.
            A Leader in SSE. Now a Leader in Single-Vendor SASE.
            A Leader in SSE. Now a Leader in Single-Vendor SASE.
            Netskope debuts as a Leader in the Gartner® Magic Quadrant™ for Single-Vendor SASE
              Securing Generative AI for Dummies
              Securing Generative AI for Dummies
              Learn how your organization can balance the innovative potential of generative AI with robust data security practices.
                Modern data loss prevention (DLP) for Dummies eBook
                Modern Data Loss Prevention (DLP) for Dummies
                Get tips and tricks for transitioning to a cloud-delivered DLP.
                  Modern SD-WAN for SASE Dummies Book
                  Modern SD-WAN for SASE Dummies
                  Stop playing catch up with your networking architecture
                    Understanding where the risk lies
                    Advanced Analytics transforms the way security operations teams apply data-driven insights to implement better policies. With Advanced Analytics, you can identify trends, zero in on areas of concern and use the data to take action.
                        The 6 Most Compelling Use Cases for Complete Legacy VPN Replacement
                        The 6 Most Compelling Use Cases for Complete Legacy VPN Replacement
                        Netskope One Private Access is the only solution that allows you to retire your VPN for good.
                          Colgate-Palmolive Safeguards its "Intellectual Property” with Smart and Adaptable Data Protection
                          Colgate-Palmolive Safeguards its "Intellectual Property” with Smart and Adaptable Data Protection
                            Netskope GovCloud
                            Netskope achieves FedRAMP High Authorization
                            Choose Netskope GovCloud to accelerate your agency’s transformation.
                              Let's Do Great Things Together
                              Netskope’s partner-centric go-to-market strategy enables our partners to maximize their growth and profitability while transforming enterprise security.
                                Netskope solutions
                                Netskope Cloud Exchange
                                Netskope Cloud Exchange (CE) provides customers with powerful integration tools to leverage investments across their security posture.
                                  Netskope Technical Support
                                  Netskope Technical Support
                                  Our qualified support engineers are located worldwide and have diverse backgrounds in cloud security, networking, virtualization, content delivery, and software development, ensuring timely and quality technical assistance
                                    Netskope video
                                    Netskope Training
                                    Netskope training will help you become a cloud security expert. We are here to help you secure your digital transformation journey and make the most of your cloud, web, and private applications.

                                      Cloud and Threat Report:
                                      Generative AI 2025

                                      This report details the rise of Generative AI (GenAI) adoption, noting a significant increase in usage and data volume over the past year. While GenAI offers many benefits, it also introduces data security risks, primarily through shadow IT and the leakage of sensitive information; however, organizations can mitigate these risks by implementing robust controls, such as blocking, DLP, and real-time user coaching.
                                      Dark cloud over the sunset
                                      19 min read

                                      Introduction link link

                                      The 2025 Generative AI Cloud and Threat Report spotlights the growing adoption of genAI, the increasing risk that genAI represents, and the strategies organizations have adopted to reduce that risk. When we first published a Generative AI Cloud and Threat Report in 2023, genAI was still a nascent technology virtually synonymous with ChatGPT. Google’s recently released Bard (now Gemini) rapidly added users but was far from challenging ChatGPT’s dominance. Only 1 out of 100 enterprise users were using genAI apps. Fast forward to 2025: nearly 1 in 20 enterprise users are using genAI apps, and even more are indirectly using genAI or contributing data for training AI models. Netskope is currently tracking the use of 317 distinct genAI apps across our base of more than 3,500 customers.

                                      This report aims to provide a data-driven accounting of the top genAI trends, highlighting adoption, risk, and risk reduction. It begins by examining how pervasive genAI has become, highlights the data risks surrounding its adoption, and analyzes the different types of controls organizations use to reduce the risk. Given the timing of this report–a few short months after DeepSeek-V3’s release made waves regarding its reported cost and efficiency–it also includes a case study surrounding DeepSeek. The case study illustrates what happens when a new and intriguing genAI tool is released and highlights the best practices to reduce the associated risks.

                                      The remainder of this report provides a deeper dive into the following:

                                      • GenAI is a growing data security risk: The amount of data sent to genAI apps in prompts and uploads has increased more than 30-fold over the past year, increasing volumes of sensitive data exposure, especially source code, regulated data, intellectual property, and secrets.
                                      • GenAI is everywhere: While most organizations (90%) use genAI apps, even more (98%) use apps that incorporate genAI features. While genAI apps are used by a relatively small population (4.9% of users), the majority use apps incorporating genAI features (75% of users).
                                      • GenAI is shadow IT: Most genAI use in the enterprise (72%) is shadow IT, driven by individuals using personal accounts to access genAI apps.
                                        GenAI risk reduction is possible: Blocking, data loss prevention (DLP), and real-time user coaching are among the most popular controls for reducing genAI risk.

                                       

                                      test answer

                                      Generative AI data risks link link

                                      Data security is the primary risk that organizations face when their users adopt genAI apps. This risk arises from two of the most popular use cases for genAI in the enterprise:

                                      • Summarization: GenAI apps excel at summarizing large documents, large datasets, and source code, giving rise to the risk that individuals will send sensitive data to genAI apps for summarization.
                                      • Generation: GenAI apps excel at generating text, images, videos, and source code, giving rise to the risk that individuals working on sensitive projects will pass sensitive data to genAI apps to generate or enhance content.

                                      The primary source of the data risks surrounding their use is that summarization and generation use cases require the user to send data to the genAI apps to provide value. Compounding these risks are additional factors, such as the number of apps on the market and the proliferation of genAI apps as shadow IT within the enterprise, which we will cover in more detail later in this report. Organizations using Netskope to protect their sensitive data are typically concerned with four different types of sensitive data flowing into genAI apps:

                                      • Intellectual property: Intellectual property is leaked to genAI apps when users seek to analyze customer lists, contracts, and other documents containing trade secrets or confidential data that an organization wishes to protect.
                                      • Passwords and keys: Passwords and keys are often leaked to genAI apps when embedded in code snippets.
                                      • Regulated data: Regulated data includes highly sensitive personal, healthcare, and financial data and are most frequently leaked to genAI apps in sectors that work with such data, especially healthcare and financial services.
                                      • Source code: A popular use case for genAI apps is to help summarize, generate, or edit source code, leading to users inadvertently leaking sensitive source code to unapproved apps.

                                      The following figure shows how frequently these five types of data are sent to genAI apps in violation of organization policies, with source code accounting for nearly half of all violations, followed by regulated data, intellectual property, and passwords and keys. The remainder of this report targets security leaders in organizations concerned with such sensitive data. How can an organization safeguard its most sensitive data against unwanted exposure while enabling its users to leverage genAI apps?

                                      Cloud and Threat Report - Generative AI 2025 - Type of data policy violations for genAI apps

                                       

                                      Generative AI everywhere link link

                                      Generative AI is everywhere. In 2022, organizations’ main question was whether the benefits of allowing ChatGPT outweighed the risks. Today, Netskope is tracking 317 genAI apps, each prompting a slightly more nuanced series of questions: Which apps should we allow, and what controls should we put in place to mitigate data security risks for those apps? The following figure breaks down the percentage of organizations using each of the top 10 genAI apps, underscoring that most organizations have decided to allow multiple genAI apps.

                                      Cloud and Threat Report - Generative AI 2025 - Most popular genAI apps based on the percentage of orgs using those apps

                                      The following figure presents the popularity of the same 10 apps over the past year, illustrating the rapidly changing AI landscape and how many organizations are actively making such decisions. Even apps generally considered ubiquitous, like ChatGPT, are just now being introduced into some organizations for the first time. Newcomers like Microsoft 365 Copilot are currently in the rapid adoption phase, similar to what Microsoft Copilot saw in early 2024. Others, like Google Gemini, Anthropic Claude, GitHub Copilot, and Gamma, are gradually being introduced in new organizations. Google Gemini is slowly closing the gap with ChatGPT.

                                      Cloud and Threat Report - Generative AI 2025 - Most popular apps by percentage of organizations

                                      In addition to the decisions that organizations have to make about hundreds of genAI apps, organizations must also consider the hundreds of additional apps that now provide genAI-powered features. Examples of such apps include:

                                      • Gladly: A customer experience platform that uses AI to streamline customer communication.
                                      • Insider: A growth management platform using AI to analyze customer data and optimize granular messaging.
                                      • Lattice: HR software that uses AI to summarize employee data, offer writing assistance, and create custom onboarding videos personalized for new hires.
                                      • LinkedIn: A social network that uses genAI to help its users in content creation (posts, profiles, job descriptions, etc.) and leverages user data to train new models.
                                      • Moveworks: An IT support platform that uses AI for task automation, information retrieval, multilingual support, and AI-driven workflow optimization across business systems.

                                      The following figure shows that most organizations use genAI in some capacity today.

                                      • 90% of organizations have users directly accessing genAI apps like ChatGPT, Google Gemini, and GitHub Copilot.
                                      • 98% of organizations have users accessing apps that provide genAI-powered features, like Gladly, Insider, Lattice, LinkedIn, and Moveworks.

                                      Cloud and Threat Report - Generative AI 2025 - Percentage of orgs

                                      At the user level, these differences are even more pronounced. Although most organizations use genAI apps directly, the population of users within those organizations is still relatively small (4.9% of users). On the other hand, apps that incorporate genAI-powered features are much more commonplace (75% of users).

                                      Cloud and Threat Report - Generative AI 2025 - Average percentage of users

                                      The purpose of the preceding two figures is to underscore that genAI use in the enterprise is even more widespread than it may appear at first glance. Apps that indirectly incorporate genAI features carry the same risks as genAI apps. To help provide comprehensive visibility into generative AI risk, Netskope provides the Cloud Confidence Index, which tracks these and many other attributes for more than 82,000 cloud apps.

                                       

                                      Generative AI on the rise link link

                                      Generative AI adoption is rising in the enterprise according to many different measures. Still, none are as important from a data security perspective as the amount of data sent to genAI apps: Every post or upload is an opportunity for data exposure. This section highlights that the amount of data sent to genAI apps has increased more than 30-fold in the past year. That is 30 times the number of posts and uploads and 30 times the opportunities for sensitive data exposure than a year ago. Furthermore, we expect the data volume flowing into genAI apps to continue to increase at a similar rate throughout 2025.

                                      The growth in the amount of data sent to genAI apps far outpaces the increase in the number of genAI users and the number of genAI apps. While the user base for genAI apps will remain relatively small for the foreseeable future, the data risks will increase dramatically as the amount of genAI use increases among that population. The remainder of this section provides more detailed insights into the increased data volume, user count, and app count we observed over the past year.

                                      Data volume

                                      Although the population of genAI users is quite small relative to the total enterprise population, the amount of data these users send to genAI apps is increasing rapidly. Over the past year, the average amount of data sent to genAI apps each month (mainly in prompts and uploads) has increased more than 30-fold from just 250 MB per month to 7.7 GB. The top 25% of organizations saw a similar increase from more than 790 MB per month to more than 20 GB per month. The rapid increase in data volume sent to genAI apps significantly increases the data security risk. More data means more potential for sensitive information to be exposed or mishandled by these apps. We expect to see more than 15 GB sent to genAI apps on average each month by the end of 2025.

                                      Cloud and Threat Report - Generative AI 2025 - Data sent to genAI apps per org median data volume with shaded area showing 1st and 3rd quartiles

                                      User count

                                      While most organizations use genAI, a small but continually growing percentage of users are actively using genAI Apps. The number of people using genAI apps in the enterprise has nearly doubled over the past year, with an average of 4.9% of people in each organization using genAI apps, as shown in the figure below. Active use in the context of genAI means sending prompts to chatbots or otherwise meaningfully interacting with the app. The top 25% of organizations have at least 17% of their user population actively using genAI apps, while the top 10% have more than one-third (35%) of their users actively using genAI apps. We expect this trend to continue throughout 2025.

                                      Cloud and Threat Report - Generative AI 2025 - GenAI users per month median percentage with shaded area showing 1st and 3rd quartiles

                                      App count

                                      The number of genAI apps each organization uses continues to rise gradually, and there are now nearly six different genAI apps in use on average. The top 25% of organizations use at least 13 apps, and the top 1% (not pictured) use at least 40. As Netskope is tracking 317 distinct AI apps in total and aggressive investment in AI startups continues, we expect the number of apps each organization uses to continue to rise throughout 2025. The rise in the number of apps underscores the importance of implementing controls to limit the risks associated with new apps. Later in this report, we provide a detailed case study of DeepSeek-R1 to show how various organizations responded during the rapid adoption of the new chatbot.

                                      Cloud and Threat Report - Generative AI 2025 - GenAI apps per organization median with shaded area showing 1st and 3rd quartiles

                                       

                                      Shadow IT/Shadow AI link link

                                      GenAI app adoption in the enterprise has followed the typical pattern of new cloud services: individual users using personal accounts to access the app. The result is that the majority of genAI app use in the enterprise can be classified as shadow IT, a term used to describe solutions being used without the knowledge or approval of the IT department. A newer term, shadow AI, was coined specifically for the special case of AI solutions. The term “shadow“ in shadow IT and shadow AI is meant to evoke the idea that the apps are hidden, unofficial, and operating outside of standard processes. Even today, more than two years after the release of ChatGPT kicked off the genAI craze, the majority (72%) of genAI users are still using personal accounts to access ChatGPT, Google Gemini, Grammarly, and other popular genAI apps at work.

                                      The following figure shows how many individuals have used personal genAI accounts at work over the past year. This number has gradually decreased from 82% a year ago to 72% today. While this is an encouraging trend, there is still a long way to go before most users have switched to accounts managed by their organization. At the current rate, most users will still be using personal accounts through 2026. A small fraction of users (5.4%) use a combination of personal accounts and organization-managed accounts, indicating that even in organizations where company-managed apps exist, many users are still using personal accounts.

                                      Cloud and Threat Report - Generative AI 2025 - GenAI usage personal vs. organization account breakdown

                                      Because most genAI users use personal accounts, the distribution of data policy violations for personal accounts is not substantively different from the broader distribution. The following figure shows the distribution, with source code violations being the most common, followed by regulated data, intellectual property, and passwords and keys.

                                      Cloud and Threat Report - Generative AI 2025 - Type of data policy violations for personal genAI apps

                                       

                                      DeepSeek: A case study in early generative AI adoption link link

                                      On January 20, 2025, the Chinese company DeepSeek released its first chatbot based on its DeepSeek-R1 model. DeepSeek claims to have trained the model at significantly less expense than OpenAI-o1 and with substantially less computing power than Llama-3.1. These claims raised questions about whether there was now a lower barrier to entry in the field of genAI, whether DeepSeek-R1 would accelerate genAI innovation, and whether DeepSeek-R1 would disrupt the competitive landscape. These questions resulted in significant media coverage and piqued the interest of genAI enthusiasts worldwide.

                                      The following figure shows the percentage of organizations where people used or attempted to use DeepSeek in the weeks after the DeepSeek-R1 release. At its peak, 91% of organizations worldwide had users trying to access DeepSeek, with 75% blocking all access, 8% using granular access control policies, and 8% allowing all access. Those 75% of organizations blocking access had preemptive policies to block genAI apps that the security organization had not approved. Preemptive block policies successfully mitigated the data security risks of new users trying out DeepSeek. In the weeks following, interest waned, and the percentage of organizations with DeepSeek users declined week-over-week.

                                      Cloud and Threat Report - Generative AI 2025 - Percentage of orgs with DeepSeek users per week

                                      The following figure shows an even more granular view, looking at the daily number of users across the platform and breaking down who was blocked and allowed. At the peak, 0.16% of users in the average organization attempted to use DeepSeek and were blocked, while 0.002% of users were allowed. After the initial peak, the number of users trying to use DeepSeek declined as interest waned and users changed their behavior in response to the blocks.

                                      The plot also shows that while 91% of organizations had users attempting to use DeepSeek, the total user population was quite small. Limited early adoption is typical for new technologies. The low population of early adopters means the risks of putting preemptive block policies in place will have little negative impact on the business.

                                      Cloud and Threat Report - Generative AI 2025 - Median percentage of people using DeepSeek

                                      This DeepSeek case study presents critical lessons for organizations that aim to reduce their data security risks surrounding newly released AI apps:

                                      • Early adopters of new genAI apps are present in nearly every (91%) organization. Early adopters pose a considerable security risk when they send sensitive data to these apps.
                                      • Most organizations have adopted a “block first and ask questions later” policy for new genAI apps. Instead of playing whack-a-mole in blocking new apps their users might try out, they explicitly allow certain apps and block all others. This type of policy is excellent for risk reduction, as it gives the organization visibility into who is trying the app and allows them time to complete a proper review. Because the population of early adopters is tiny, potential negative impacts on the business are limited.

                                       

                                      Local generative AI adoption link link

                                      One way to manage the data risks created by cloud-based genAI apps is to run the genAI infrastructure locally. Running locally is becoming more accessible, made possible by organizations like DeepSeek and Meta that have made their models available for download, by tools such as Ollama and LM Studio that provide tooling to enable running models locally, and communities such as Hugging Face that facilitate model and data sharing. Some organizations even train their own models, use retrieval-augmented generation (RAG) to combine genAI and information retrieval, or build their own tooling around existing models.

                                      Over the past year, the number of organizations locally running genAI infrastructure has increased dramatically, from less than 1% to 54%. Most of the growth came in the first half of 2024 and has since leveled off. We expect this trend to continue with only modest gains in the coming year. As expected, the user population running local genAI infrastructure is quite small, at less than 0.1% of users on average. Among this user population, Ollama is the most popular tool to enable running models locally, and Hugging Face is the most popular place to download models, tools, and other resources.

                                      Cloud and Threat Report - Generative AI 2025 - Organizations using genAI locally

                                      Shifting from using genAI apps to local genAI models changes the risk landscape, introducing several additional risks. The OWASP Top 10 for Large Language Model Applications provides a framework for thinking about such risks, which include:

                                      • Supply chain: Can you trust all the tools and models you use?
                                      • Data leakage: Does your system expose sensitive information from training, connected data sources, or other users?
                                      • Improper output handling: Are any systems processing genAI outputs doing so safely?

                                      Mitre Atlas is another framework for considering AI risks. It provides a granular view of attacks against AI systems. Those running self-managed genAI systems must also consider these attacks, which include:

                                      • Prompt injection: Can adversaries craft prompts to cause the model to provide unwanted outputs?
                                      • Jailbreaks: Can adversaries bypass controls, restrictions, and guardrails?
                                        Meta prompt extraction: Can adversaries reveal details of the system’s inner workings?

                                      In other words, shifting from cloud-based genAI apps to locally hosted genAI models reduces the risk of unwanted data exposure to a third party but introduces multiple additional risks. Training your own models or using RAG further expands those risks. However, what we are seeing so far is not a trend away from using cloud services. As we highlighted earlier in this report, the number of people using genAI cloud apps, the number of apps in use, and the amount of data being sent to these apps are all increasing. Instead, we are seeing the addition of locally-hosted genAI infrastructure on top of the cloud-based genAI apps already in use. Local hosting of genAI models represents an additive risk.

                                       

                                      Generative AI risk reduction link link

                                      More than 99% of organizations are enforcing policies to reduce the risks associated with genAI apps. These policies include blocking all or most genAI apps for all users, controlling which specific user populations can use genAI apps, and controlling the data allowed into genAI apps. The following sections break down the particular policies that are most popular.

                                      Blocking

                                      Blocking is the most straightforward strategy for reducing risk and, therefore, the most popular. The challenge with blocking is two-fold. First, blocking can impact the business by limiting user productivity. Second, blocking can drive users to get creative, such as using personal devices or personal mobile networks to evade blocks. While 83% of organizations use block policies for some apps, the scope of these policies is targeted. The following figure shows the number of apps actively blocked (this means that there is a policy to block the app, and users are attempting to use the app anyway) across the entire organization (this means that nobody in the organization is allowed to use it) is gradually increasing, with four apps on average blocked today. The top 25% of organizations block at least 20 apps, while the top 1% (not pictured) block more than 100 apps.

                                      Cloud and Threat Report - Generative AI 2025 - Number of apps blocked per org median with shaded area showing 1st and 3rd quartiles

                                      The breakdown of the 10 most commonly blocked apps in the figure below provides insight into organization-wide block strategies. The most popular apps (ChatGPT, Gemini, Copilot) do not appear on this list, and those apps that do appear on this list have many alternatives on the market. In the case where there are many alternatives, organizations use block policies to drive their users toward specific approved apps and away from unapproved apps. In some cases, like Stable Diffusion (an image generator), the app might not serve any legitimate business purpose.

                                      Cloud and Threat Report - Generative AI 2025 - Most blocked AI Apps by percentage of organizations enacting a blanket ban on the app

                                      Real-time user coaching

                                      The previous section mentioned this notion of steering users away from certain apps and toward others using block policies. Real-time user coaching offers a more nuanced alternative to the block policy, one that can remind users that a specific app is not approved for handling sensitive data but allows the end user to decide whether or not to use it. Real-time user coaching is effective because it empowers the user to make the right decision in the moment. It also helps to shape user behavior by providing immediate feedback and guidance.

                                      Coaching, a more nuanced policy than blocking, is less prevalent but still growing. 35% of organizations use real-time coaching to reduce genAI risks. We expect that percentage to increase to more than 40% in the coming year. Organizations often pair coaching with other policies, such as data loss prevention policies.

                                      Cloud and Threat Report - Generative AI 2025 - Percentage of organizations using real time user coaching to control genAI app access

                                      Data loss prevention

                                      Data loss prevention (DLP) reduces the risks associated with genAI apps by inspecting prompts and data sent to genAI apps in real time. DLP can decide whether to allow or block the content based on rules configured by administrators. Intellectual property, secrets, regulated data, and source code are the four most common types of data organizations restrict from being shared with genAI apps. Pictured below, DLP policies for genAI grew in popularity at the beginning of 2025 and have recently leveled off, with 47% of organizations using DLP to control genAI app access.

                                      Cloud and Threat Report - Generative AI 2025 - Percentage of organizations using DLP to control genAI app access

                                       

                                      A CISO perspective link link

                                      CISOs and security leaders reading this should be deeply concerned about the data security risks associated with the rapidly growing use of genAI technologies. Cloud-based genAI apps and locally hosted genAI models introduce a growing risk of unwanted data exposure and a new set of security vulnerabilities.

                                      Organizations must take immediate action to manage these risks effectively. I encourage leaders to review the NIST AI Risk Management Framework, which provides a valuable roadmap for organizations to govern, map, measure, and manage genAI risks.

                                      Specific steps that organizations can take include the following:

                                      1. Assess your genAI landscape: Understand which genAI apps and locally hosted genAI infrastructure you are using, who is using them, and how they are being used.
                                      2. Bolster your genAI app controls: Regularly review and benchmark your controls against best practices, such as allowing only approved apps, blocking unapproved apps, using DLP to prevent sensitive data from being shared with unauthorized apps, and leveraging real-time user coaching.
                                      3. Inventory your local controls: If you are running any genAI infrastructure locally, review relevant frameworks such as the OWASP Top 10 for Large Language Model Applications, NIST AI Risk Management Framework, and Mitre Atlas to ensure adequate protection of data, users, and networks.
                                        It is crucial to continually monitor the use of genAI within your organization and stay informed of new developments in AI ethics, regulatory changes, and adversarial attacks.

                                      All cyber and risk leaders should prioritize AI governance and risk programs to address the evolving challenges posed by genAI technologies and their adoption through shadow AI practices. By taking proactive steps to manage these risks, we can ensure the safe and responsible adoption and use of genAI for the benefit of your organization.

                                       

                                      Netskope Threat Labs link link

                                      Staffed by the industry’s foremost cloud threat and malware researchers, Netskope Threat Labs discovers, analyzes, and designs defenses against the latest cloud threats affecting enterprises. Our researchers are regular presenters and volunteers at top security conferences, including DefCon, BlackHat, and RSA.

                                       

                                      About This Report link link

                                      Netskope provides threat protection to millions of users worldwide. Information presented in this report is based on anonymized usage data collected by the Netskope One platform relating to a subset of Netskope customers with prior authorization.

                                      This report contains information about detections raised by Netskope’s Next Generation Secure Web Gateway (SWG), not considering the significance of the impact of each individual threat. The statistics in this report are based on the period from February 1, 2024, through February 28, 2025. Stats reflect attacker tactics, user behavior, and organization policy.

                                      Cloud and Threat Reports

                                      The Netskope Cloud and Threat Report delivers unique insights into the adoption of cloud applications, changes in the cloud-enabled threat landscape, and the risks to enterprise data.

                                      Storm with lightning over the city at night

                                      Accelerate your cloud, data, AI, and network security program with Netskope