
The rapid rise of generative AI (genAI) applications is reshaping enterprise technology strategies, pushing security leaders to reevaluate risk, compliance, and data governance policies. The latest surge in DeepSeek usage is a wake-up call for CISOs, illustrating how quickly new genAI tools can infiltrate the enterprise.
In only 48 hours, Netskope Threat Labs observed a staggering 1,052% increase in DeepSeek usage across our customer base. With 48% of enterprises seeing some level of activity, this adoption spike highlights the need for robust AI security controls.
DeepSeek’s adoption spike: A familiar pattern in genAI trends
This rapid uptick isn’t unprecedented. Similar patterns emerged with ChatGPT, Google Gemini, and other AI-powered applications. The adoption curve often follows an initial spike driven by curiosity, peaking before tapering off as organizations implement security controls or employees move on to the next trend.
Netskope Threat Labs analyzed DeepSeek adoption from January 24 to January 29 and observed the following regional increases:
| Region | Observed increase from January 24-29 |
|---|---|
| United States | +415% |
| Europe | +1,256% |
| Asia | +2,459% |
These numbers reflect the speed at which genAI tools spread globally, often outpacing enterprise security teams’ ability to react.
Understanding the risks: Data privacy, compliance, and shadow AI
The widespread adoption of DeepSeek raises critical security concerns:
- Data privacy and governance: Employees may unknowingly input sensitive data into AI tools, exposing intellectual property and regulatory-protected information.
- Compliance and regulatory risks: Uncontrolled AI adoption can lead to non-compliance with GDPR, CCPA, and industry-specific regulations.
- Shadow AI in the enterprise: Without visibility and governance, unsanctioned AI applications can create security blind spots.
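To make the data-privacy risk concrete, the sketch below shows one simple form such a control can take: scanning an outbound prompt for patterns that resemble regulated data before it reaches a genAI endpoint. This is a hypothetical, minimal illustration, not Netskope's product logic; the pattern names and regexes are assumptions chosen for demonstration, and a production DLP control would be far more sophisticated.

```python
import re

# Hypothetical patterns illustrating the kinds of regulated data a
# DLP-style control might flag before a prompt leaves the enterprise.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number format
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # loose card-number match
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9]{16,}\b"),  # common key prefixes
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A policy engine could block, redact, or log the prompt when hits are non-empty.
hits = scan_prompt("Summarize account 123-45-6789 for the board")
# hits == ["ssn"]
```

In practice, such pattern matching is only one layer; enterprise controls typically combine it with app-level visibility (which genAI tools are in use) and user coaching, rather than relying on regexes alone.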
Adding to these concerns, the Netskope AI Labs team recently