Grok is a chatbot developed by Elon Musk’s xAI. It was initially released to select individuals in November 2023 and became generally available to all X (formerly Twitter) users in December 2024. With the release of Grok-3 in February, Grok’s popularity rose rapidly. However, that rise was short-lived, and its enterprise user base has since plateaued. At the same time, many organizations took a defensive stance, blocking the new app pending their own security and AI governance reviews. This blog post quantifies Grok’s rise in popularity, summarizes how organizations have responded, and provides recommendations for organizations looking to limit its use.
Grok popularity
Before the release of Grok-3, Grok had a very small user base in the enterprise. Only 2.6% of organizations had anyone using the chatbot, and only 2 out of every 10,000 people (0.02%) in those organizations used the chatbot monthly. The fanfare surrounding the release of Grok-3 drew enough attention to the app that today we see Grok being used in 23% of organizations. The steepest rise occurred in February and appears to have since plateaued. The following figure illustrates the sigmoidal growth trend observed during the past six months.
A closer examination of the number of active Grok users reveals a more telling trend. The number of enterprise users accessing Grok peaked in March, when the average organization had 5 out of every 1,000 (0.5%) users accessing Grok. Since then, the user base has shrunk from its peak and continues to fall. By the end of May, it had dropped to fewer than 4 out of every 1,000 (0.4%) users. The following figure highlights this peak-and-decline trend, suggesting that the decline is likely to continue in the coming months.
Together, these two trends tell a story of a new genAI app with significant marketing muscle entering an already crowded market (as of today, Netskope is tracking more than 564 genAI apps). Its marketing efforts helped spark some initial interest, which was enough to inspire users worldwide to try it out. However, in the four months since the release of Grok-3, it has failed to gain significant traction in the enterprise. By comparison, 82% of organizations are currently using ChatGPT, with an average of 8.1% of users in those organizations actively using the chatbot each month (and in 25% of organizations, at least 25% of users actively use the chatbot each month).
Blocking Grok
Organizations tend to lean toward a policy of “block first and ask questions later” for new genAI apps. Netskope Threat Labs shared a case study of such behavior earlier this year in our Generative AI Report: The marketing fanfare surrounding the release of DeepSeek’s R1 model created worldwide interest in the chatbot. In response, 75% of organizations put policies in place to block the DeepSeek app. Ultimately, interest in the app waned among enterprise users, and Netskope no longer sees significant use of the DeepSeek app. While not nearly as dramatic, Grok has experienced a similar trend. As users in more organizations tried out the new chatbot, the number of organizations actively blocking it increased. In April, that number peaked at nearly 30% before dipping slightly to 29% in May. This trend suggests that we may have reached a plateau, either as more organizations adopt more nuanced and less draconian controls, or as interest continues to fade. The figure below, combined with the two figures above, suggests that the policies restricting access to Grok in the enterprise may be limiting its adoption.
Whenever we present a global average, the natural follow-up question is whether there are any regional differences. The following chart shows minimal variation among the regions worldwide, with North America currently having the highest block rates and Europe having the lowest. Block rates in North America appear to still be on the rise, while block rates in other regions have plateaued.
While block policies are standard this early for a new app, an even larger percentage of organizations have less restrictive policies in place, including:
- Blocking some (but not all) users from accessing Grok.
- Leveraging real-time user coaching to direct users away from Grok and toward approved alternatives.
- Using DLP to prevent sensitive data from being sent to Grok.
Unlike the percentage of organizations with outright block policies, which appears to have plateaued, the percentage with these more granular policies is still rising. At least 61% of organizations had such policies in place as of the end of May, as shown in the figure below, and this upward trend is likely to continue in the coming months.
Predictions
It is still too early to predict the ultimate fate of Grok. Will it overcome this initial plateau and become a mainstay of the enterprise world? Will it go the way of DeepSeek? What we do know is that the top 10 most popular genAI apps in the enterprise today fall into one or more of the following three categories:
- They were first to market in the genAI space (ChatGPT)
- They are owned by Microsoft or Google and heavily integrated into their products (Gemini, GitHub Copilot, Microsoft Copilot, Microsoft 365 Copilot)
- They solve specific enterprise challenges (Grammarly – writing, Anthropic Claude – coding, Perplexity – research, Gamma – presentations, Otter.ai – transcription).
Given this information, Grok’s path to success in the enterprise appears to be tightly coupled to X/Twitter’s success in the enterprise (more to come on that front in a future post).
Recommendations
Netskope Threat Labs recommends that you have a policy in place to block new genAI apps when they are released, to buy enough time for security and AI governance reviews. After review, enterprises should implement policies to restrict each app to its approved use cases. Three common patterns are:
- Use Netskope’s real-time user coaching to remind users of company policy whenever they interact with an unapproved app or an app with a narrow use case.
- Restrict the use of an app to only those user groups that have a valid business use case.
- Use Netskope’s DLP policies to ensure that no sensitive information (such as regulated data, intellectual property, source code, or secrets) is sent to unapproved apps.
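The three patterns above can be sketched as a single policy-evaluation function that a forward proxy might apply to each genAI request. This is an illustrative sketch only: the function, app names, group names, and DLP patterns below are all hypothetical and do not represent Netskope’s actual policy syntax or detection engine.

```python
import re

# Hypothetical policy data: which apps passed governance review, and which
# user groups have a valid business case for a narrowly approved app.
APPROVED_APPS = {"ChatGPT"}
GROUP_ALLOWLIST = {"Grok": {"research"}}  # app -> groups allowed to use it

# Toy stand-ins for real DLP detectors (secrets, keys, etc.).
DLP_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS access key ID
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
]

def evaluate(app: str, user_group: str, prompt: str) -> str:
    """Return the action taken for one genAI request: block, coach,
    dlp_block, or allow."""
    # Pattern 0: block-first for apps with no completed review.
    if app not in APPROVED_APPS and app not in GROUP_ALLOWLIST:
        return "block"
    # Pattern 2: restrict narrowly approved apps to specific user groups,
    # coaching everyone else toward an approved alternative.
    allowed_groups = GROUP_ALLOWLIST.get(app)
    if allowed_groups is not None and user_group not in allowed_groups:
        return "coach"
    # Pattern 3: never let sensitive data reach the app.
    if any(p.search(prompt) for p in DLP_PATTERNS):
        return "dlp_block"
    return "allow"
```

In a real deployment these decisions are expressed as policy rules in the security platform rather than application code, but the evaluation order shown here (block unreviewed apps first, then scope by group, then inspect content) mirrors how the three recommended controls layer together.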
To learn more, we suggest reviewing our latest Generative AI Report.