0:00:05 Emily Wearmouth: Hello and welcome to the Security Visionaries podcast, a place where we bring experts together to talk about cyber, data, security, infrastructure, all sorts of good stuff. I'm one of your hosts, Emily Wearmouth. And today I'm hoping to start a little bit of a fight. Let me explain myself. I've been thinking recently about how CIOs and CISOs seem to have an almost conflicting mandate around AI when they're speaking to their CEOs and being given their tasks for the day. So you've got the CIO who's being specifically asked, it seems to me, to run around the corridors of an organization disrupting things, finding efficiencies, finding opportunities for productivity, finding new revenue streams, but literally being asked to go and take things apart and rebuild them with AI as part of that process, which sounds great. But what happens when walking those same corridors is a CISO whose mandate from the CEO is to defend the organization, particularly around emerging threats that might pertain to AI. So that's what I want us to talk about on the podcast today, and I've got two excellent guests who hopefully will help us navigate through this little spat. I'm going to start by introducing our first guest, who is representing the CIO corner of the boxing ring, Mike Anderson. So Mike, welcome to the podcast.
0:01:19 Mike Anderson: It's great to be here. Looking forward to the conversation. Hopefully we can make it a fun spar.
0:01:24 Emily Wearmouth: Yeah, hopefully. So Mike is the Chief Data and Information Officer at Netskope, the security and networking company. I'll put my readers on. He joined Netskope from Schneider Electric and has numerous roles on advisory boards as well. When I'm looking at guests for the podcast, I try and look right back into the depths of their CV and see what they did way back early on in their career to try and get a bit of a feel for how they might come at the conversation. And Mike, I can see the slight beads of sweat and fear as to what's early on in your CV. I spotted that very early on you did a lot of work with application service providers on data integration. So I'm thinking you're probably coming at this, the idea of data integrations and AI, with no fear at all. Is that fair, Mike?
0:02:08 Mike Anderson: Yeah, no fear at all except fear itself.
0:02:13 Emily Wearmouth: Well, welcome to the podcast. Let me introduce our other guest, because I thought while it would be great to get a CIO and any random CISO, it would be even more fun if it was the CISO who actually works with Mike day to day, because they won't be talking theoretically and they might come with a bit of beef already in the fight. So our other guest is James Robinson, who is the CISO at Netskope. Welcome to the podcast, James.
0:02:36 James Robinson: Yeah, I'm here to defend.
0:02:41 Emily Wearmouth: So James's CV is all about data applications and cloud security. He started off in sort of tech analyst and systems engineer roles, and I did spot, James, that very early on in your career you worked at a major brewer. So perhaps we can have an offline conversation about some of the perks of that job.
0:02:59 James Robinson: Anytime. Only if we can do it over a drink.
0:03:03 Emily Wearmouth: Awesome. Alright, shall we dive in? So I wanted to start with a question to you, Mike. When you're roaming these corridors and you're looking at the big picture of an organization, how are you determining what you and your team should be spending your time on, making sure you're not just tickling around the edges and really finding the AI projects that have the greatest potential impact for an organization? Where do you go looking?
0:03:29 Mike Anderson: Well, the first one is where we have a really well-defined process, because if you throw a technology, it doesn't matter what it is, at a bad process, it just gets you to the same outcome faster. So the first thing is make sure we have a sound process. And one of the things I always say and tell the teams is make sure we follow the revenue. How are we making money as an organization, and where is there friction in that process? And then how can we use technology to help reduce that friction? So when you think about AI, it's no different than robotic process automation or any other kind of tool that we've seen in this space over the past 20 years. It's just really how do we make sure we apply that appropriately? And AI is not a one-size-fits-all approach. There may not be a good use case for AI in specific areas, and it may be AI along with other tools in the tool belt as well.
0:04:20 Emily Wearmouth: And James, when you are starting or perhaps just sort of standing and watching Mike's team create chaos, how do you go about getting your arms around AI? What is your starting point?
0:04:31 James Robinson: Yeah, I think one of the things that, to go contrary to this topic, Mike and I actually found some common ground in literacy. And so one of the things that we started to identify was people didn't know what their boundaries were, how much should they share, what's approved, what's not approved, and those types of things. And so doing things like the Promptathon, I think, was a really good example where I should have worn that shirt instead of this one. But
0:05:00 Emily Wearmouth: Explain to us, what do you mean by Promptathon? For the purposes of the listener, I can see you're wearing a Hackathon shirt, but you just said Promptathon. What's that?
0:05:09 James Robinson: So it was the idea that we would try to get some people to get some ideas flowing, use some approved AI tools, for us, it was Gemini internally, use that and then start to create ideas and different things that they could do. It also very clearly started to outline some of the boundaries that people could go down or shouldn't go down for us to be able to use Netskope data within, and so on and so forth. So that elevated everyone's knowledge level, because they were doing projects, they were using AI on a daily basis and using it to have some fun. And so it was an education awareness item. That right there kind of started to set the mark of not only pushing the education and pushing the security education knowledge, but then also just a general education on the space. We're a technology company, so for us there were a lot of different projects around AI and GenAI, but then with that it kind of focused it in and also set that mark of what is appropriate, what's not appropriate. That's one of the areas. The other area that I start to look at is just that attack surface. I think the space has defined very, very well what attack surface management is as it relates to networks and systems and things that may be exposed. But attack surface can also come in shadow AI, and it can come from even the use here of a recording application for us to do this podcast. What happens if that now has AI embedded in it? No one really knew.
We didn't review it, we didn't run it through security. It's not about throwing the brakes on it and slowing the projects down in the traditional security sense, but really just making sure that the AI is safe and it's not learning from the data that we're uploading, because we don't want it to take Netskope internal data. Or if we have a use case where we're trying to do something and build a product with customer data or something like that, and we're starting to teach the AI, that's where we really find ourselves in difficult conversations, in difficult places. The use case and the benefit, even well documented, may be there, but if we don't know if they're using the information that we're uploading, if they don't know that we're using that and teaching their AI, well, that's a problem. And so that's kind of where we have to slam the brakes on. That's where the "know" becomes "no," we cannot do that, and set that line. But again, kind of going back, raising the bar with education, so we kind of have that common ground and surface that we can all talk through. And then, two, start to look at what are those shadow AI areas that we need to jump on, and identify those and then pivot from there.
0:07:41 Mike Anderson: I know we have enough video recorded of you just now, James, to create a deepfake of you, so I appreciate that. So speaking of that, wherever we're going to upload this video, just saying: I approve, I approve, I approve. We got it down.
0:07:56 Emily Wearmouth: So okay, maybe I should have got someone else onto the podcast, some sort of anonymized user, because it sounds like you two with your Promptathon are marching more in step than I'd perhaps set up at the beginning of this conversation, and that maybe it's those shadow use cases emerging from random business units that might be more of the issue here, and less of that centralized organizational approach into AI.
0:08:20 James Robinson: I'll tell you where the conflict comes in: it's just that ability to go as fast from a security review perspective. Mike and I have conversations all the time around this got hung up, or this isn't moving fast enough, or this use case is kind of just sitting and it's been sitting for two weeks, can you get the team to review it? And I think that's where the conflict starts to get rooted for most teams. Obviously the mandate to get into more efficiency and operationalize AI and do all these things, we want that to happen as well on the security side, but we're trying to leverage our processes and procedures, and sometimes, honestly, they're just not fast enough for the business. And so that's one of the things that, as a leader, I'm trying to always look at and try to figure out: can we even use AI ourselves to be able to speed that up? Why not use the efficiencies of AI to be able to speed things up ourselves? And so we are trying to do the same, but we're just not operating as fast as, I'd say, the business wants us to. And that's where we start to get into conflict, in those scenarios where we're not seeing eye to eye, and it's, not to speak for Mike, but it'd be something like, "Hey, this is a simple, easy use case. I don't understand why it's hanging up or why it's so slow." Well, it's not that one is hanging up or slow. It's that we have a flood of 'em coming in from multiple business units and teams, and the backlog is big. Over 1,200 assessments