0:00:01.4 Emily Wearmouth: Hello and welcome to the Security Visionaries Podcast. Today I'm joined by a guest I've wanted to get on for a little while now, and not just because he has a great job title, although he does have a great job title. Mark Day is Chief Scientist at the cybersecurity and networking company Netskope. Welcome to the show, Mark.
0:00:20.1 Mark Day: Thank you, Emily. It's nice to be here.
0:00:22.6 Emily Wearmouth: Now, before we dive into the topic that I plan to quiz you on a little today, could you just tell us a little bit about yourself and your background?
0:00:29.6 Mark Day: Sure. As you said, I'm Chief Scientist at Netskope. I did my graduate work at MIT where I have a PhD. I'm an expert in distributed systems and I've done a lot of different distributed systems work through the years at a variety of tech companies, Netskope being the current one.
0:00:52.4 Emily Wearmouth: So help me understand: when I think of science, my brain immediately goes to pipettes and test tubes. What is the chief scientist role in a tech company? In particular, how do you work alongside a chief engineer, a chief product officer, a chief technology officer? Where do you fit in that mix?
0:01:09.1 Mark Day: Well, my role here is largely one of asking good questions, and then recognizing when I'm getting good answers, or maybe not so good answers, and pushing for better ones when I need to. A particular focus of mine has been on the infrastructure, the plumbing, if you like. I know enough to be dangerous about all the different kinds of technologies that go into building the Netskope solution, so when things go wrong, or when we need to be moving in a different direction, I'm well positioned both to understand what the issues are and to find the people who can best accomplish that within the organization. It's partly a question of being a catcher for lots of strange new technical things that might be materializing but aren't in someone else's portfolio yet, and also being prepared to go and ask questions without worrying too much about whether they're stupid questions. That's where the PhD is helpful: if I don't understand something, it's probably genuinely hard or genuinely new.
0:02:25.1 Emily Wearmouth: That's a really clear way of... I really want your job, Mark. I'm just going to put it out there. I'll take myself off to MIT. I'm after your job.
[laughter]
0:02:33.1 Emily Wearmouth: Now, at this point I'm going to put out a little content warning. If you didn't spot it in the show title, this episode will be using a swear word. It's a mild one, but if you are protective of sensitive little ears that may be hanging around your feet as you listen and prepare dinner, here is your warning that we will be using, in full, the word that is often shortened to BS. And there is a reason why, I promise. But maybe usher some folks out of the kitchen. Okay, warning duly delivered. Mark, a few months ago you told me that large language models were bullshit artists, and that despite that, you believe they can still be incredibly valuable, which seems hugely contradictory. So start us off: why do you believe that LLMs or AI models are bullshitters?
0:03:18.2 Mark Day: Right. Well, this is not an original insight. All I've really done is pick up and embrace a published article that made the point; the title of the article was "ChatGPT Is Bullshit." The point there is that there is a technical notion of bullshit in moral philosophy, where it's basically used to describe the behavior of saying whatever is necessary to make your case without regard to whether it's true or not. And this is a thing we can identify people doing. Sometimes it's okay and sometimes it's not. But it's a characteristic of these large language models that are getting a lot of attention in the AI space, because fundamentally the way a large language model works is that it constructs things that are plausible, things that sound good based on what it's been trained on, but it doesn't have any concept of what's true and what's untrue. So in some quasi-definitional sense, large language models are intrinsically bullshitters. They don't know how to do anything different from that.
0:04:33.0 Emily Wearmouth: Is this why we get issues like hallucinations within AI systems?
0:04:37.8 Mark Day: Yes, hallucination is the polite way of referring to bullshit.
[laughter]
0:04:42.2 Mark Day: And so I think one of the services of that paper was to point out that when AI researchers talked about their models sometimes hallucinating, or said there was an issue with hallucinations and further research needed to be done on it, what they were actually stepping their way around was the fact that their models were bullshitters. And as I said, although there are certainly disagreements about this within the AI community, I think there are good reasons for thinking that, given the way large language models work, which is effectively that they statistically produce likely sentences as opposed to making reference to some sort of body of knowledge, they are intrinsically bullshitters. And the thing that is striking is how often they manage to say something that happens to be true, as opposed to the fact that occasionally we notice that it's bullshit.
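To make the mechanism Mark describes concrete, here is a minimal sketch of a toy next-word language model in Python. The tiny corpus and the names are illustrative assumptions made up for this example, not anything from the episode or any real system; the point is only that generating plausible-sounding text requires no representation of truth at all.

```python
# A toy "language model": it learns which word tends to follow which
# in its training text, then generates fluent-sounding output by
# sampling likely next words. There is no notion of truth anywhere
# in it -- only likelihood.
import random
from collections import defaultdict

# Tiny illustrative corpus (an assumption for this sketch).
corpus = (
    "the model produces likely sentences . "
    "the model sounds confident . "
    "the model has no concept of truth . "
    "likely sentences sound confident ."
).split()

# Count which words follow which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=12):
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # pick a plausible next word
    return " ".join(words)

print(generate())  # fluent-looking text, with no fact-checking anywhere
```

Scaling this statistical idea up to billions of parameters produces far more fluent output, but, as Mark argues, the generator is still optimizing for plausibility, not truth.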
0:05:41.5 Emily Wearmouth: That's fascinating. So, okay, I'm not a fan of untruths, so convince me: how can something that intrinsically disregards concepts of truth have value? We require trust in the systems we use. So how can we build up trust in systems that we know are lying to us?
0:06:00.0 Mark Day: Well, the challenge here is not necessarily that they are seeking to lie to us, or that they lie 100% of the time. I think another useful metaphor that some people use for generative AI, for large language models, is "intern as a service." That captures the idea that it's very much like you just hired an intern: they know a lot of things, but they don't really understand how your organization works, or what matters to you, or how certain aspects of the world are put together, and so they can make some very silly mistakes. I think there's a similar quality to AI: you can ask it a certain kind of question and it will come back with a fabulous answer right away, and you'll say, I don't know how I lived without this. And then you can ask it a similar thing and it comes up with something that you just know is absolutely wrong, and you go, well, that was terrible. And other than knowing what you are talking about, I don't know that you have a way of protecting yourself against that.
0:07:11.7 Emily Wearmouth: Our listeners tend to be leaders in the security, data, and technology fields. What thought process would you advise people put their ideas through when they're thinking about the tasks they would give to these sorts of systems?
0:07:24.9 Mark Day: If you want to understand the conventional wisdom on a particular subject with which you're not familiar, that's a great task for an AI. If you want it to summarize a large corpus of stuff that you don't have time to read, and you're okay with maybe there being minor glitches along the way, that's another great task. But if you turn to an AI and say, tell me what's going to be the market-leading product in five years, that's preposterous, right? You're just not going to get that kind of information out of it. And even narrower questions can go wrong. I was recently asking a fairly specific technical question, looking for scenarios to match a problem I was trying to explain to a customer, and I specified the scenario quite carefully, and the AI kept giving me answers that I thought I had ruled out. It was one of those areas where, if I hadn't already known quite a bit about that space, I would have been tempted to just copy-paste what it said, put it on my slides, and then present garbage to the customer.
0:08:40.4 Emily Wearmouth: So there's a lot of AI within Netskope's systems. What sort of things are you or the Netskope team getting AI to do? And why are you comfortable with it doing those things?
0:08:51.2 Mark Day: What matters here is to distinguish between different flavors, or different genres, of AI. The large language models, the ChatGPT-type things, get all the attention because it's very exciting to be interacting with a human-seeming entity, asking it questions and so on. But almost none of the AI that adds value in the Netskope platform is of that type. What is much more common in the ways we're using AI is machine learning models that are not large language models, doing tasks that, when humans do them, require some degree of intelligence. So, for example, there is a model in Netskope that allows us to identify things that are very likely to be passports, or very likely to be driver's licenses. That's the sort of thing you can accomplish by getting a large corpus of passport images and driver's license images and effectively teaching the machine to recognize that sort of thing. And it does that task about as well as a person does. But that doesn't in any sense raise the worry of, you know, I should be concerned about the answer I'm getting from this AI because maybe it's bullshitting me. So that's one reason why, broadly speaking, we've got quite a bit of AI in the Netskope platform, but I'm not very concerned about the bullshit problem there.
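As a rough illustration of the narrower kind of machine learning Mark contrasts with LLMs, here is a minimal supervised-classification sketch in Python using scikit-learn. The synthetic feature vectors and labels are assumptions invented for the example; Netskope's actual passport and license model and its training data are not public, and real systems would work from images rather than random numbers.

```python
# A sketch of a narrow, supervised classifier: train on labeled
# examples, then recognize new ones. The random "image features"
# below stand in for real passport / driver's license scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each scanned document has been reduced to a 20-number
# feature vector, and the two document types look statistically different.
passports = rng.normal(loc=0.0, scale=1.0, size=(500, 20))
licenses = rng.normal(loc=1.5, scale=1.0, size=(500, 20))
X = np.vstack([passports, licenses])
y = np.array([0] * 500 + [1] * 500)  # 0 = passport, 1 = driver's license

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# Unlike a large language model, this model answers exactly one narrow
# question -- "which class is this?" -- so there is nothing for it to
# bullshit about; at worst it is simply wrong, measurably so.
```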
0:10:31.4 Emily Wearmouth: So Gen AI was the hot favorite last year, and this year we've all gone off it; we're all flocking to see a new band in town, agentic AI. And so I wanted to ask you: last time we spoke about this, agentic AI wasn't really on the radar. People say there's a big difference between generative and agentic, so how might we need to consider the way we use agentic AI if we agree that it's bullshitting us too? Is it different, or does the same caution apply?
0:11:01.0 Mark Day: First of all, I personally think the jury is still out as to whether agentic is meaningfully different from Gen AI. I am more inclined to the theory that says Gen AI actually tells us something interesting about the nature of language and the world, and about the fact that these relatively simple-seeming processes can do extremely non-trivial things. Agentic, to me, just seems to add some ideas of workflow and interaction into the mix. I don't have the same sense that it is a new world. I suppose I may yet be convinced, but at this point I'm more like: okay, so AI, having had a big flashy moment, is now back to the usual mode in which there's a bunch of people running around saying hypey things but not really having much to show for it.
0:11:58.5 Emily Wearmouth: So your take would be it's just a new marketing way to get everybody excited about something new again? [laughter]
0:12:04.0 Mark Day: Yeah, I think at the moment the jury is still out. I'm open to being convinced, but I'm not yet convinced.
0:12:09.9 Emily Wearmouth: Interesting. Now, because of your awesome job title, allow me a brief dalliance into the world of sci-fi. What are you most excited about that is yet to come, ideally in the realm of artificial intelligence, but maybe not?
0:12:26.2 Mark Day: I think the thing that would most excite me, that I haven't yet seen and that I think we will see at some point, though possibly not in my lifetime, is some form of new art that is recognizably rooted in AI. The analogy I would draw here is to the way movies developed. Movies were initially a novelty, and people used them for little snippets of stuff. Then people used them in an imitative mode, where they would film plays; early movies are very much influenced by what was happening in the theater. Then there's a point at which their own vocabulary develops, and subsequently there are cinematic works that are recognizably masterpieces in one dimension or another. I think we're at the stage where it's a novelty and people are using it to goof around. Some of the things people do with current AI art, where you're generating some picture that you like, are very much in the imitative mode. We're not yet able to conceptualize even what the equivalent of Citizen Kane would be; we certainly are not yet at a point where we can imagine a work like Christian Marclay's The Clock. But it seems to me that the interaction of people with AI will eventually lead to works of art that are in some sense analogous.
0:14:09.7 Emily Wearmouth: So when I was at uni, I lived with someone who was doing an art history degree, and I remember her toiling over lots of essays around what is the meaning of art? Is this item art? Is that item art? What's art? What's craft? You're going to open up a whole new sort of course that that poor girl is going to have to take next time around: can art be created by anything other than a human? [chuckle]
0:14:29.6 Mark Day: Oh, absolutely. Absolutely. Well, I mean, I think the interesting question there is not so much, you know, can a machine create art? I think that some machine-human combination is going to do something very interesting.
0:14:42.5 Emily Wearmouth: Well, I know my son is very excited about AI in the future, enabling him to be Harry Potter without having to go through auditions or have his life ruined by fame. So we could all look forward to that. [laughter]
0:14:52.7 Mark Day: There you go. Exactly.
0:14:54.9 Emily Wearmouth: Brilliant. Well, is there anything else, Mark, that you wanted to leave our listeners with when they're thinking about AI in the coming months?
0:15:01.5 Mark Day: I think that the single word, bullshit, is probably the most important thing to leave them with.
0:15:06.7 Emily Wearmouth: [laughter] We didn't want to get to the end of the podcast and not swear one last time. Thank you very much.
0:15:11.5 Mark Day: There you go.
0:15:12.0 Emily Wearmouth: Well, thank you, actually, for making it our most sweary episode ever. I think we carried it off with erudite aplomb and a lot of gravitas. So thank you very much.
0:15:21.2 Mark Day: Thank you, Emily.
0:15:22.8 Emily Wearmouth: You have been listening to the Security Visionaries Podcast and I've been your host, Emily Wearmouth. If you enjoyed this episode, please do share it, but also make sure to follow us on your favorite podcast platform so that you never miss an episode in future. We'll catch you next time.