
Host Max Havey digs into the world of agentic AI-enabled threats and cyber espionage with guests Neil Thacker, Global Privacy and Data Protection Officer at Netskope, and Ray Canzanese, Head of Netskope Threat Labs. Together they discuss a recent report from Anthropic about the first-ever reported agentic AI-orchestrated cyber espionage campaign, exploring why the report lacked key technical evidence and the potential motivations behind its release. They also dissect the notion that AI is making existing attack techniques, like phishing and social engineering, more accessible and faster, and debate the likelihood of another agentic AI-driven data breach in 2026.

I think the important reason to keep talking about it, especially right now, is to make sure that we are all in that mindset that some of our insiders are not necessarily people. And when you have non-person insiders, like non-humans doing stuff in your organization, oftentimes that looks different. That activity appears in different logs and different places, and you need to make sure that your strategy includes that and covers that. It’s not enough to say, well, of course I care about insiders.

Ray Canzanese, Head of Netskope Threat Labs


Timestamps

(00:01): Introduction
(01:35): Neil and Ray's agentic AI predictions for 2026
(04:05): Discussion of Anthropic's recent AI-enabled cyber espionage report
(09:20): Important details to include in future reports of threats
(11:40): Why AI companies emphasize nefarious uses by attackers
(13:07): Why threat actors get better outcomes with AI compared to enterprise initiatives
(16:35): The importance of classifying threats as human or AI
(19:15): Drivers of future agentic breaches
(23:53): Shifting cybersecurity conversations around AI
(27:10): Advice for containing AI risks
(30:16): Conclusion

 


On this episode

Ray Canzanese
Director, Netskope Threat Labs


Ray is the Director of Netskope Threat Labs, which specializes in cloud-focused threat research. His background is in software anti-tamper, malware detection and classification, cloud security, sequential detection, and machine learning.


Neil Thacker
Global Privacy & Data Protection Officer at Netskope


Neil Thacker is a veteran information security professional and a data protection and privacy expert well-versed in the European Union GDPR.


Max Havey
Senior Content Specialist at Netskope


Max Havey is a Senior Content Specialist for Netskope's corporate communications team. He is a graduate of the University of Missouri's School of Journalism with both a Bachelor's and a Master's in Magazine Journalism. Max has worked as a content writer for startups in the software and life insurance industries, as well as edited ghostwriting from across multiple industries.



Episode transcript


0:00:01 Max Havey: Hello and welcome to another edition of Security Visionaries, a podcast all about the world of cyber, data, and tech infrastructure, bringing together experts from around the world and across domains. I'm your host, Max Havey, and today we're going to do our best to untangle the hype from the reality when it comes to AI threats and cyber espionage. And with me, we've got two really great guests who both very much have thoughts around this subject. First up, we've got Ray Canzanese, Head of Netskope Threat Labs. Ray, welcome back to the program.

0:00:29 Ray Canzanese: Hey, thanks for having me, Max. It's been a while.

0:00:32 Max Havey: It has, it has. It's been maybe a couple of years, so glad to have you back on the pod. And second up here, back again after recently appearing on our Hackers episode, we've got Neil Thacker, Global Privacy and Data Protection Officer here at Netskope. Welcome back, Neil.

0:00:45 Neil Thacker: Thanks, Max. Thanks for inviting me back. Last time we were in the nineties; we're back in the 2020s.

0:00:51 Max Havey: I've brought you into the present tense. This time we'll be having a future-looking conversation though. Awesome. There have been many recent conversations around AI-enabled threats and cyber espionage. Most notably, a report from Anthropic about the first-ever reported AI-orchestrated cyber espionage campaign that some critics have noted lacks some key evidence in terms of support. But before we dig into that, I know both Ray and Neil have offered some thoughts about AI-enabled threats and the future of agentic phishing and data breaches as we look ahead at 2026. So let's start there. Neil, can you take us through your recent prediction noting that 2026 will likely be the first year we see an agentic AI-driven data breach?

0:01:35 Neil Thacker: Yeah, sure. I mean, I'm slightly annoyed at Anthropic; they've highlighted this in 2025, and I predicted 2026. But the reason for this is I feel like we've reached an inflection point in terms of AI within cybersecurity. We are seeing lots and lots of organizations obviously developing, deploying AI technologies, also enhancing their existing capabilities here, but also using it for defensive capabilities too. So of course, it was kind of a clear observation that in 2026 we'll see an offensive-style attack here, and it's not anything new. For many years, we've had technologies, we've had tools that attackers have used to launch attacks, and of course they're growing in terms of sophistication. So for me, it was looking at the landscape where we were heading and realizing, well, this is going to come soon, and if it's not going to be in 2025, it'll be in 2026. But we're seeing this, lots of organizations are preparing for this arms race when it comes to AI, defense and offense, and pretty much preparing for this type of outcome.

0:02:40 Max Havey: Certainly. And Ray, I know your team also put together a number of predictions around this. How do you sort of react to that, and are there any predictions from your team that you'd want to note around this growing agentic concern?

0:02:53 Ray Canzanese: Yeah, I think similar to Neil, we had predictions on both sides of this, right? On one side, attackers are using the technology to amplify attacks or to streamline attacks. And the risks there are, number one, it gets a little bit faster and more accessible to develop an attack. And then number two is that it's really good at certain things. If I wanted to make a fake video of you Max trying to convince somebody to divulge their password or something else, that would be pretty easy to do with today's technology. And so we are predicting to see upticks in social engineering and phishing. And on the other end, as organizations adopt more of these technologies and maybe don't follow best practices, or maybe there's some emergent risks that nobody really thought about that increases risks of breaches happening there as well. So it's not just when we talk about AI security, it's not just about, well, let's keep it out of the hands of the hackers, but it's also, well, let's make sure when we use it, we use it safely and responsibly.

0:04:05 Max Havey: Very much kind of a double-edged sword there. It is the protection side, but also using this and enabling folks to use it in a way that is secure and isn't accidentally leaking stuff in the process. With all that in mind, going back to the Anthropic report, from your views, why was this report lacking so much of the usual and useful data that these sorts of reports include and that sort of enables others to learn from it, when it comes to such a huge thing, the first-ever AI-enabled cyber espionage event like this?

0:04:39 Ray Canzanese: I think the short answer is because you don't need more details for marketing. For the marketing, you've got to talk about how powerful your product was and you've got to talk about how responsible you were being in wielding that power. So it makes sense from a marketing standpoint. And so if something like this happens and you are actually interrupting an attack, it makes sense. You would want to push that out immediately. It's going to make everybody working for you feel good, and you're going to have a bunch of people doing podcasts talking about it, really good marketing. Being maybe even a little bit more cynical, I think that if you are a first mover in a space where you've developed quite awesome technology, you go around saying, hey, what we built here is really dangerous, so you really shouldn't let other people build what we built, because you wouldn't trust them with it, but you can trust us because you can see we are here doing the right thing. So that's a little bit more cynical, saying maybe it's a little bit more than marketing, maybe it's a little bit of smart business savvy, talking about whether more companies should be allowed to do the same thing.

0:05:51 Max Havey: Certainly. Well, and I mean I think that's kind of the interesting thing there, is that there's so much potential as it relates to AI, and it's always been sort of seen as this black box for a lot of folks, especially for people who are a little bit more trepidatious about diving in head first with it like this. Do you guys have a hunch as to how accurate this sort of claim is? It is good for marketing, it's good for showing how powerful this technology can be, but how reliable do we think this is as an actual attack or something like this?

0:06:25 Neil Thacker: Yeah, I mean, it wasn't necessarily unique. I mean, if you look at the style of the attack that occurred, it was the same kind of pattern that many attackers have been following for years. So it's not anything novel. It's not anything unique. So I think, again, in terms of sharing this information, obviously this happened, but there's nothing amazing to learn from the type of attack anyway. I think, again, the exciting part, the unique part, is that this was fully orchestrated by an AI agent. So it has learned well from humans in terms of acting on these principles. And if you look at this, right, it followed a very similar pattern. It did the reconnaissance, it did the attack surface mapping, it went through, found vulnerabilities, it then did some credential harvesting and then went through the final stage of data collection. So we've been following that same pattern for years.

0:07:17 Neil Thacker: I mean, I guess the really exciting thing is if it would've found a new way of doing this without going through so many steps perhaps, but we're not quite there yet in terms of learning from this. So yeah, that's kind of how I see this. Again, I believe that this happened. I think, again, to Ray's point, was it something that gave us new information? No. It was, again, a great writeup, a great story to tell, which by the way, storytelling is good. People love stories and learning about these things through stories, but again, it lacks some technical details. But I think that will be coming soon. We'll see more of these types of attacks and we'll see more information being shared on, again, why these attacks are perhaps unique and how they're learning. I also go back to 2016 when we saw this play out at DEF CON as part of the Cyber Grand Challenge, where we saw fully autonomous machines fighting against other machines in a game of capture the flag. And it was great because over the time that they were finding vulnerabilities, launching exploits, they learned as well. So this is something we have to be prepared for. That was 10 years ago, by the way; we saw the theory was possible 10 years ago. We're now coming to the age where these things are going to be happening more and more.

0:08:31 Ray Canzanese: And I think that's a good point, Neil, about we totally believe that they detected something. And I think one of the reasons is because a lot of us working in cybersecurity have been using these tools to do exactly that kind of stuff from a red team perspective. For a while we've known this is possible. So of course you would see more and more of it happen, and it seems in this case, they found somebody doing it and they believed they were doing it with ill intentions. Right. Was it a Chinese APT group? I don't know. Right. There's no evidence that was put forth there, but did it happen? Almost certainly, yes, it happened. Right. So the questions there are around maybe the details, not around the concept and whether or not they found the abuse and stopped the abuse.

0:09:20 Max Havey: If there are future attacks like this that are reported, what are some of those key details that you would want to see in a report like this that would make it a better thing that folks could learn from, looking ahead?

0:09:34 Neil Thacker: Yeah, I think I would want to see more technical detail, the details around how the attack, if it was successful, how it was successful, what was the target, what was the exploit, et cetera. One thing I did actually love from the report was the use of new terms, which I don't know if they have been used in the past, but there's been a focus on this, the speed of exploit, my new favorite phrase, the operational tempo of the attack. It was clear, again through the research, that they identified that this was not possible for a human to do, therefore it must be an AI agent doing this. So again, speed, fast exploitation, the ability to use, again, additional types of vulnerability scanning and all these kinds of things. So that's kind of what I'd be looking at. Also, in future, I guess what we're talking about here is that there are perhaps some unique details that we can start determining and identifying, and putting some form of attribution on, which is always difficult in terms of attacks. Again, can we identify this as being, again, a human- or machine-led attack? Is that important? I mean, that's the other question we need to be asking now. We need to know if this is, again, machine or human. So yeah, this is kind of where we need to start looking; this is what we need, I guess, in future reports, those ways of better identifying, again, the new tactics, techniques, et cetera, being used by, again, autonomous tools.

0:10:57 Ray Canzanese: One of the things I think when you write a report like this and you're trying to help others, you need to share enough details that other people in the same boat as you would be able to take that intel and do something about it. So in this case, if you believe that it was an APT group attacking your service, you're not the only ones out there hosting LLMs and allowing people to interact with them. So there's a whole industry out there that would love to know what those TTPs are so they can start looking for the same on their services, blocking certain behaviors. That's the kind of stuff that I think you'll expect to see more and more in these types of reports.

0:11:40 Max Havey: Certainly. And it also sort of begs a question, seeing this report coming from such a big AI provider, and I think you maybe touched on this a little bit, Ray, but why does it seem like AI companies are so keen to suggest that AI is being so successfully used for nefarious reasons, for these sorts of attacks and things of that sort? Why is that kind of an important thing for them to be talking about, especially maybe in this sort of storytelling kind of frame?

0:12:06 Ray Canzanese: And I mean, I think I hinted at that quite heavily upfront, where you're telling a story, you're really hyping up the AI, this AI stuff is so powerful, look at what it was able to do. That's a really good story to tell. And I mean, this is really on brand for Anthropic. It's really on brand for them to talk about the risks. Because you can also then, if you talk about the risks, talk about how you're mitigating them and build yourself up as maybe the leader in this space of thinking about the risks and the mitigations, and that then helps the business, that helps the money flow your way.

0:12:48 Neil Thacker: If you think about this as the lion and the lion tamer, right? It's their role as the lion tamer. They have this hugely powerful capability, and yet they're in full control, or they have the ability to identify when it's perhaps being used in a certain way. So yeah, that's the analogy I refer to, the lion and the lion tamer capability.

0:13:07 Max Havey: I find this interesting too, because there's a data point that I saw from MIT recently noting that 95% of enterprise AI initiatives are failing to drive any real discernible value. And so why would threat actors be driving such better outcomes in that sort of thing? Are they just playing with it more, are they finding new ways to break it?

0:13:26 Ray Canzanese: To some extent, it's finding a problem that LLMs are good at solving, and Neil made this point already, that this was a pretty routine attack in terms of things that have been seen before over and over and over again, right? LLMs are very good at that, right? Autocomplete this for me. I'd like you to fill in all the details of what I need to do in this attack. And so that's part of it. Choose the right problem for the tool, and that's the right problem to match this tool up with. So it makes sense that you would get a good outcome there. A lot of organizations are getting good outcomes when it comes to automating programming-type tasks, which is what this is.

0:14:07 Neil Thacker: It's automating those tasks for speed. One thing we've seen definitely in cybersecurity is how we measure responses to breaches, how organizations have done this for so long; it's how quickly you can respond to that breach to mitigate, to put in some control to ensure that the breach is not successful. So earlier in the attack chain, ultimately, that's when you want to intervene. Now, again, looking at AI, it is designed to work in a fast and automated way that can scale as well. So in many cases, if you're looking at the use cases for AI, this is one great use case. Unfortunately, it therefore impacts organizations because of that. Because again, we saw the same in the early days of large-scale DDoS attacks, those kinds of things. You could pretty much automate those scripts, those bots, to target organizations and do it that way. If you had humans doing this, it would be largely ineffective. But we're now seeing this in a more advanced way, targeting organizations using specific techniques, targeting data exfiltration, et cetera. This is where we're moving towards; this is the modern-day level of scalability of these types of attacks.

0:15:20 Ray Canzanese: I like that you used words like bots and scripts, because we're calling them agents now, but it's conceptually still the same thing. And the speed came with the bots and scripts, but you still needed people to write the bots and write the scripts. And I think the piece that's helping here is that, okay, well now we can write those bots and we can write those scripts faster, and we can write them in a way that's targeted towards specific organizations, because that's what the LLMs enable us to do. So it's a little bit of accessibility mixed in with the speed. Here's somebody who maybe didn't have the know-how to write all those scripts themselves or didn't have the resources to pull together a small team to do that in short order. But a bunch of AI agents are much easier to get to do that task.

0:16:09 Neil Thacker: And again, they can iterate quickly, so they'll learn, okay, not successful this time, and then go again a minute later. Whereas, again, from a human perspective, it takes perhaps a few hours, a few days to work out, okay, why wasn't I successful? Again, this was also proven 10 years ago. This is the thing, that these systems can iterate very, very quickly and then move on. And the goal, of course, is to be successful in the attack. So if not successful that time, it can iterate and move on quickly to perhaps be more successful next time.

0:16:43 Max Havey: Well, it's also interesting to hear the rejoinder that you guys kind of keep coming back to, the notion that none of this is particularly new. It's all stuff that we've been kind of aware of for a time, or techniques that have been happening for a bit now, and AI has just sort of made it more accessible. It's taking some of the barriers out, for lack of a better term, for the bad guys, for your threat actors and those sorts of things. And do you think, looking ahead, it will be more and more important to be able to classify these things between an AI-enabled threat versus a human threat? I think as you sort of pointed out, Neil, sort of the man or machine argument here, is that going to be an important classification to keep in mind for cybersecurity professionals looking ahead like this?

0:17:24 Neil Thacker: Yes, I think so, because of, I guess, the types of attack that you may see. So for an external party, I don't think it matters too much, but when it's inside your organization and it's your AI agent, then definitely you need to know that it was your AI agent that did this. So that's the other thing, is that we've always treated attacks from an external threat actor or an internal threat actor perspective. For me, you definitely need to know when it's inside your organization. I've been in this industry 25 years, and when you investigate an internal incident, you need to know who that was, of course, because that person is inside your organization. External, you need to know it was an attack and you need to know how to defend against this. So I guess that's when you need to know: an agent inside your organization, yes, you need to know if you own that agent and respond accordingly.

0:18:17 Ray Canzanese: And I think that's a good answer, because at some level, you don't care, right? If the insider is an agent or the insider is a person, it's still doing something bad and we still want it to stop. But I think the important reason to keep talking about it, especially right now, is to make sure that we are all in that mindset that some of our insiders are not necessarily people. And when you have non-person insiders, like non-humans doing stuff in your organization, oftentimes that looks different. That activity appears in different logs and different places, and you need to make sure that your strategy includes that and covers that. It's not enough to say, well, of course I care about insiders. We have to think about, well, what flavor would those insiders take? And am I looking in all the right places to detect those insiders and what they might be doing wrong?

0:19:15 Max Havey: Certainly. And I think with that sort of in mind, I want to bring us back to Neil's prediction that we kicked things off with, thinking about what another big agentic breach could look like, in addition to what Anthropic has already kind of reported here. If we were to see a breach like the one Neil is talking about, what do you think would be the drivers? Is it an accident, a bad actor using agentic AI? What do you think is a likely sort of scenario that we could potentially see?

0:19:43 Neil Thacker: It's a good question. I think we are going to see a mixture. I think we're already seeing this feedback; I mean, I'm having discussions with CISOs who are saying, oh, I didn't know we were utilizing this service. Oh, I didn't realize we were using this use case. I didn't realize this service was consuming this much data. So I think we've already seen some feedback from organizations who are realizing that perhaps they're seeing this already in terms of an accidental issue, a security incident. Has it been an accidental data breach, has data gone to the wrong third party? Or again, are there too many agents now interconnected, such that they've lost track of these services and how they're being utilized? I hear this question commonly in discussions I'm having today: who owns my MCP server?

0:20:33 Neil Thacker: And so I think we're definitely going to see this issue occurring. It won't be written about to the same level as, again, external threat actors, but I think we're going to see an interesting mix of these. So yeah, perhaps, again, going back to the likelihood and impact types of discussions, perhaps there is going to be a slightly higher likelihood of an insider issue, but something that can be controlled, whereas from an external threat actor, it's still likely, but the impact could be huge, because now we're dealing with data that's been exfiltrated from the organization and potentially is now lost, has leaked, has resulted in fines, penalties, sanctions, et cetera for the organization, or again, general reputational damage and this challenge for organizations to have to recover. So yeah, I think there's going to be a mix moving forward. But yeah, it's something we have to watch for. We have to look at the metrics in terms of how we're going to see this moving forward. I think we've always needed to look at countering any threat with a series of, again, controls, recommendations, et cetera, that we can implement. And I think for us, as for any organization out there today, they need to be looking at both here, the insider and the external actor.

0:21:48 Ray Canzanese: Another angle that I talk to a lot of security teams about as well is these AI companies themselves. These are companies that are just working with massive amounts of data. Many of these companies are less than a year old, less than two years old. They are very young companies moving quickly in a new space. And security teams are nervous about those companies and what those companies are doing to safeguard all the data that they have. And so I hear a lot of people asking, well, how can I allow people to use service A, but only for these super narrow use cases? Or can I make sure that none of our trade secrets or customer data or source code makes it into that service? That's another concern. And another thing to keep an eye out for is, when there's a breach of one of these AI companies, what's it going to look like? Is it going to be data theft? Is it going to be model poisoning? There's a lot of opportunities for different ways to do harm if you are breaching one of these companies that your actual targets are using.

0:23:02 Neil Thacker: And it could be, I mean, this could be the low-hanging fruit, right? Exactly to your point, Ray, we've seen this. I mean, I've looked into organizations whose services somebody at Netskope has wanted to use, and we ask them questions and they look back at us frowning: cybersecurity, what's that? So yeah, absolutely, this is a potential concern. This is something every organization has to do. It's a great point. Looking at your vendors, looking at the providers of your services, how are they responding to these types of attacks on their organization, their infrastructure, their systems and services?

0:23:36 Ray Canzanese: And then even if you do decide that you trust them and you're going to use them, what are you doing to monitor that interaction, just to make sure that you're verifying that nothing has gone wrong, that nothing is being used in a way that's inconsistent with how you intended for it to be used?

0:23:53 Max Havey: All of this feels like it's leading toward an evolving sort of conversation that security teams need to be having with their organization, about AI broadly and especially agentic AI, as that becomes a bigger and more substantial part of the way that businesses are using AI. And with that in mind, you guys have spoken about it quite a bit here, but how does this sort of shift conversations that are already happening as it relates to AI in cybersecurity?

0:24:24 Neil Thacker: Yeah, definitely. We are talking about this, right? It's been the hot topic for the last few years, and we're now at this, as I said at the beginning, tipping point, inflection point, where organizations are now realizing, again, there are obviously mixed reports about organizations not seeing immense value, but yet every organization is using some form of AI, even if it's, again, gen AI capabilities and these kinds of things. But most organizations are investing heavily in these services to better, again, automate many of their workflows and some of their business operations. So we are seeing that this is in organizations today, and I think, again, it's how organizations are now responding. We had the same with the cloud. Cloud didn't necessarily creep up on organizations, people were aware of it, but then there was this explosion. Everyone was now using SaaS and public cloud services.

0:25:16 Neil Thacker: And I go back to, if I can go back to the nineties when we had the internet, again, the same kind of thing. It kind of crept in slowly and all of a sudden everyone was utilizing services on the internet. So we are at that point in time now where, again, I would say there's a high percentage of people in organizations using a form of AI. So yeah, it's how we start preparing for that. Going back to Ray's point, the discussions are now: do we have an AI inventory of our AI services? How have we reviewed those services? How are we controlling them? So for instance, we have a common one where we have an AI asset inventory and we map that to our data asset inventory, and we understand what data is going to that AI service, and if we see an anomaly there, we have to step in and block that anomaly. So that's kind of how I'm seeing this evolution happen in organizations today.
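To make that inventory-to-inventory mapping a little more concrete, here is a minimal, hypothetical sketch. The service names, data classifications, and approval lists are illustrative assumptions, not any vendor's API; a real deployment would source them from discovery tooling and enforce blocks at a gateway or policy engine rather than just print findings.

```python
# Hypothetical sketch: join an AI service inventory with a data asset inventory
# and flag flows that send data a service was never approved to receive.

from dataclasses import dataclass, field

@dataclass
class AIService:
    name: str
    approved_data_classes: set[str] = field(default_factory=set)

@dataclass
class DataFlowEvent:
    service: str      # destination AI service
    data_class: str    # e.g. "public", "internal", "source_code", "customer_pii"
    bytes_sent: int

# Illustrative inventories; real ones would come from discovery tooling.
ai_inventory = {
    "chat-assistant": AIService("chat-assistant", {"public", "internal"}),
    "code-copilot":   AIService("code-copilot",   {"public", "internal", "source_code"}),
}

def flag_anomalies(events: list[DataFlowEvent]) -> list[str]:
    """Return human-readable findings for flows that violate the mapping."""
    findings = []
    for e in events:
        svc = ai_inventory.get(e.service)
        if svc is None:
            findings.append(f"Unknown AI service in use: {e.service} (shadow AI?)")
        elif e.data_class not in svc.approved_data_classes:
            findings.append(
                f"{e.service} received {e.data_class} data "
                f"({e.bytes_sent} bytes) outside its approved classes"
            )
    return findings

if __name__ == "__main__":
    sample = [
        DataFlowEvent("chat-assistant", "customer_pii", 20_480),
        DataFlowEvent("unknown-agent", "internal", 1_024),
    ]
    for finding in flag_anomalies(sample):
        print(finding)
```

The point of the sketch is simply that the two inventories have to be joined before an "anomaly" means anything; the enforcement itself would live in whatever DLP or AI gateway policy the organization already runs.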

0:26:09 Ray Canzanese: And I'd say that a good sign that I've noticed recently is, obviously we've been talking about this a lot for a few years, and I've had questions coming to me from security teams, and they'll ask me like, hey, how do I secure my agentic AI deployments in Bedrock? And I'll say, oh, cool. Can you tell me more about how you're using it? And they're like, well, we're not using it yet. We're just thinking about that. That's obviously the direction that things are going. We're trying it out and we want to think about security. And I'm impressed. I'm happy. This is the way I hoped this would be going, is that we've been talking about it enough that people are aware of it, that they're asking these questions before they get too deep into this process, so that they can go in a little bit more well prepared than perhaps they did for the initial AI chatbot craze, where people just started using it overnight and you were playing catch up from the beginning.

0:27:10 Max Havey: Absolutely. And coming to the end of our conversation here, I want to get a sense from you guys: what can organizations do to start containing these sorts of sprawling risks now and to start having these conversations? What's a piece of advice you would give to organizations that are looking to better secure this, looking ahead, as this will continue to be a conversation and a threat that they have to deal with?

0:27:34 Neil Thacker: So I mean, I think it starts off with understanding what you have. We always start there in cybersecurity: it's what AI services you have and also how they're connected. So obviously with the types of attacks we've seen, in many cases, it can be agent-to-agent connectivity that causes this. So having a good understanding of what you're using, why you're using it, looking at identifying what the service is doing, so what looks like normal, and then adding some controls in there. So looking at isolating if you see unusual activity, having the ability to, of course, also make sure you are separating your platforms and your services, so that, again, if an AI agent has access to some of your non-production environments, going back to Ray's point, for testing, for experimentation, it also doesn't have access to your production environment or can't escalate its privileges to that production environment.

0:28:27 Neil Thacker: So looking at ways of separating there. But I think what we'll see in 2026 is a huge increase in organizations bringing in things such as AI gateways to better identify the use of AI in their organization and how the services are being connected, and putting in those guardrails and applying the appropriate levels of technical controls around this. This is part of what my prediction was: yes, we are going to see a significant breach that's going to make headline news, and the response will be, right, now we have that justification to go and invest and bring in these capabilities to better defend our organization against this. So that's what I'm expecting to see around these new risks that we have to deal with. Even though they may be based on the old form of risks and the old tactics and techniques, we have to respond, I guess, with technology to counter, again, these growing fast autonomous threats as well.

0:29:27 Ray Canzanese: Yeah, I totally agree. Everything starts with visibility. When you start looking to see what you have visibility into, you often start finding out where you have blind spots in your visibility as well, and that gives you a good strategy for where you need to do more. And the thing to keep in mind as you are looking into a problem like this is that it's a dangerous blend of things happening on endpoints, things happening in the cloud, human identities, non-human identities, all these things working together. And so you find out what you're using, start mapping that out, and start looking to see where your blind spots are, where you can get MCP gateways, AI gateways, cloud logs, anything that's going to help you solve those visibility gaps.
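A rough sketch of that blind-spot mapping, with purely illustrative service and telemetry-source names: join the AI services you know about against the sources you can already collect (cloud audit logs, AI or MCP gateway logs, endpoint telemetry), and anything observed by nothing, or by only one source, becomes the priority for new gateways or logging.

```python
# Hypothetical sketch: find AI services in your inventory that no telemetry
# source currently covers, i.e. your visibility blind spots.

known_ai_services = {"chat-assistant", "code-copilot", "internal-rag-agent", "mcp-server"}

# Which services each telemetry source actually observed recently (illustrative).
coverage_by_source = {
    "cloud_audit_logs": {"chat-assistant", "code-copilot"},
    "ai_gateway_logs":  {"chat-assistant"},
    "endpoint_agent":   {"code-copilot", "internal-rag-agent"},
}

def blind_spots(services: set[str], coverage: dict[str, set[str]]) -> set[str]:
    """Services that appear in no telemetry source at all."""
    observed = set().union(*coverage.values()) if coverage else set()
    return services - observed

def single_source_only(services: set[str], coverage: dict[str, set[str]]) -> set[str]:
    """Services visible through exactly one source (fragile visibility)."""
    counts = {s: sum(s in seen for seen in coverage.values()) for s in services}
    return {s for s, n in counts.items() if n == 1}

if __name__ == "__main__":
    print("No visibility at all:", blind_spots(known_ai_services, coverage_by_source))
    print("Only one source:", single_source_only(known_ai_services, coverage_by_source))
```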

0:30:16 Max Havey: I think that's a great place for us to stop, guys. And Neil, thank you so much for joining us here. You had a lot of great perspective to offer, especially as you were part of the impetus for this episode, predicting the first agentic AI-driven data breach in 2026. So thank you so much for joining us.

0:30:35 Neil Thacker: Yeah, sure. No problem. Yeah, it was great to speak to you again, Max and Ray, and I'm sure we'll be talking a lot more about this in 2026.

0:30:42 Max Havey: Also, almost certainly. And Ray, thank you so much as always for joining, bringing some great threat expertise. I know you've worked on many a threat report and written about many a threat, so you offered tons of great perspective here, and I'm sure you'll have more to say as things kind of develop here.

0:30:55 Ray Canzanese: Yeah, thanks for having me, Max. Looking forward to the next one.

0:30:58 Max Havey: Absolutely. And with that, you've been listening to the Security Visionaries podcast, and I've been your host, Max. If you've enjoyed this episode, share it with a friend and be sure to subscribe to Security Visionaries on your favorite podcast platform, whether that's Apple Podcasts, Spotify, or our new videos over on YouTube. There, you can check out our back catalog of episodes and check out new ones publishing every other week, hosted either by me or my wonderful co-hosts, Emily Wearmouth and Bailey Pop. And with that, we will catch you on the next episode.
