0:00:06 Max Havey: Hello and welcome to another edition of Security Visionaries, a podcast all about the world of cyber, data, and tech infrastructure, bringing together experts from around the world and across domains. I'm your host, Max Havey, and today we're digging into the world of human-centered cybersecurity and risk. Our guest today is Beth Miller, Field CISO at Mimecast, who literally wrote the book on reframing risk. So Beth, thank you so much for joining us here today.
0:00:30 Beth Miller: Thank you so much for having me here today, Max. I appreciate it.
0:00:34 Max Havey: Absolutely. Well, so just to dive right in here, in case folks aren't really familiar with the term, can you take us through what a human-centered approach to cybersecurity is and what that sort of looks like?
0:00:46 Beth Miller: Yeah, for sure. So I think when we're talking about human-centered, we really have to start at the definition. Risk, to me, is human by definition. It's not a system failure, it's not a compliance gap. It's a person actually navigating uncertainty without necessarily having the right tools. So the old question, how do we stop mistakes?, is replaced with new questions around how do we help people succeed under pressure and under constant change. The shift is that we're not looking at users or employees as variables to be controlled, but as individual navigators who need to be equipped. Everything else follows when you shift that perspective. And really, something to support that: we know that 95% of breaches involve human error, but only 8% of employees actually cause 80% of the incidents. So there's that gap between the perception of human-centered security as soft or lesser, when it's actually smart security.
0:01:53 Max Havey: Why is fear driving a lot of these trainings, and driving the way folks approach security? Why is fear such a driver in all of that?
0:02:03 Beth Miller: Fear is efficient. It grabs attention, it creates compliance, it feels measurable. So the industry, the security personnel, kind of default to it. But the neuroscience is very clear that threat response degrades decision-making, and fear builds shame. And shame makes people hide mistakes rather than report them. So really, we've optimized for awareness but not for behavioral change, and those are not the same things. Fear gets attention; it doesn't build judgment. And when people are afraid of getting in trouble, they stop telling you when things go wrong, and that's the opposite of what we need. We've been running these fire drills and calling it fire safety.
0:02:49 Max Havey: It's this notion that people are more afraid of admitting they're wrong than of saying they're wrong and fixing the problem. And that's such an interesting way to think about cybersecurity and risk.
0:03:00 Beth Miller: There's a lot built around this idea of controls. You have systems, you have data, you have identities, but really the overarching arc of this is humans, right? Humans have identities. They access systems to interact with data. Humans are actually the beginning and the end link in this chain. And involving them in this process is key to ensuring better risk management overall, across the systems and the data protection and the identity management.
0:03:33 Max Havey: As you talk about a human-centered approach, it's changing the mindset you're approaching security with, for the people in your company. It's moving away from being fear-based, afraid to make a mistake, toward it's okay to make mistakes, but it's better to learn from those mistakes and move forward. And that kind of raises all ships.
0:03:56 Beth Miller: And I think what is different about human risk management, compared to maybe some of the other risk disciplines within an organization, is that it really requires a multidisciplinary approach. When you're building a human risk management program, there are really five personas involved: you have security, you have IT, you have HR, you have legal, and then you have the managers and employees themselves. Fundamentally, you are creating cultural change in alignment with business objectives. So making employees part of the solution is really just good business, in addition to ensuring that this next vector, the human attack surface, is managed in a way that can also be measured and matured.
0:04:46 Max Havey: Well, it's especially interesting too because our most recent guest, Jenny Radcliffe, the people hacker, was on talking about social engineering and the sorts of things that go into that. And from her perspective, a lot of it is psychologically reading people. And the human attack surface — at least in the security circles I've talked in, I don't feel like I hear a lot of people talking about the human attack surface. So I think that's quite interesting.
0:05:16 Beth Miller: It's the next evolution of where security has been headed. So again, if I can double down: the 90s were about wrapping your head around all the data in your environment. The 2000s were about the systems and the migration from on-prem to the cloud. And then you had identities in the early 2010s, being able to track human and non-human secrets across your organization. And so we have evolved into humans now, and identifying that that is the connective tissue. We're at the point where we're acknowledging humans, and really AI is the next chapter here, but this is where we're at right now. It doesn't undo the connective tissue across the layers. You had something, Max — what were you going to say?
0:06:03 Max Havey: Yeah, well I was going to ask specifically, when you're thinking about this sort of reframing risk around the human side of things, how is AI changing that? I mean, I feel like every episode we need to ask the question, how is AI impacting this? Because that is such a ubiquitous thing happening regardless of industry right now. So I'm kind of curious to get your thoughts. Where does AI sort of fit into this conversation?
0:06:28 Beth Miller: A couple of ways to think about it: traditionally, we've been talking about perimeter defense, users as liabilities — so employees as liabilities — and security as a tax on productivity. But AI makes compliance-framework thinking permanently obsolete. You can't write a policy for every scenario that AI will generate; it just won't scale. So AI amplifies the need for human judgment. It doesn't replace it. The human is now supervising AI outputs — more skill is required, not less. And moving into that, we know 95% of organizations use AI to defend, but 55% admit that they're not fully prepared for AI-driven attacks. So we've automated the defense layer while leaving the human layer on basically 2005-era compliance training.
0:07:24 Max Havey: I mean, that's kind of the exciting thing there, where I feel like a lot of the conversation around AI is, oh, how are people going to interact with this? And I think emphasizing that the human level of judgment is the key thing to making AI work. It's not always just having the right tools, but having the right people behind them who understand how to do this in a way that isn't going to put everything in jeopardy.
0:07:47 Beth Miller: I think the bottom line is really that AI is making the threats smarter, and we can't keep responding by keeping our humans dumb. So they have to understand the part of this process that they can affect and that is incumbent upon the organization because it's good business. The AI era doesn't eliminate the human element, just like you said, Max, it really amplifies it. And so when you can't write a policy for every AI scenario, you need people who can think through how to respond.
0:08:21 Max Havey: 100%. Well, and so with that in mind, how should security leaders be going about reframing their brain around these more human-centered approaches? Especially as things are consistently evolving, what are some ways that you would sort of suggest that they start thinking to reframe that mindset?
0:08:38 Beth Miller: So I think the definition of risk really needs a reset as well. When you look it up in the dictionary, it tends to lean towards doom and gloom, but the actual root of it is Italian, and when you dig in, it means to dare. And somewhere along the way we kind of lost that — instead of to dare, it became to avoid anything that could go wrong. And they really had this saying around navigation — it'll come to me in a second — but basically, nothing ventured, nothing gained.
0:09:10 Max Havey: Yeah, absolutely.
0:09:11 Beth Miller: And when we really think about it, somehow it ended up in compliance and security, and we rewrote the definition by our standards, obviously. So we have to start at fundamentally redefining what that word means, and looking at navigation signals, not just warning lights. In the book that I wrote, I talk about this FOLD framework: find the current — what risk mindset are you currently in? — then how do you orient yourself to the context, how do you learn what's driving it, and then decide deliberately. An example of that: instead of "don't click phishing attacks" — so, don't click — it's "here's how to recognize pressure tactics and what to do when you feel rushed." Now you're giving them a mental model and not just a rule, a do-this-don't-do-this binary choice. Rules tell people what not to do; reframing really teaches them how to think. The bottom line here, and I want to reiterate it, is that risk is not the enemy. Risk doesn't equal bad. It's unexamined risk that really is the problem.
0:10:18 Max Havey: Absolutely. And that's driving a genuine mindset shift in the way a lot of people think about risk. And I love the notion that risk is not trying not to do the bad thing, but daring to do the thing. And I think, again, it's changing the way you think about it, and that's exciting. In a past life on this show, we talked about the idea of security as a team sport, where everybody's all in. And I think those go hand in hand in that same sort of way. You can tell people every day, don't click on the link, don't click on the link, but no doubt it's always an eventuality. So I think it's more about giving people all the tactics they need to actually put these things into practice.
0:11:05 Beth Miller: Oh my gosh, Max, you just set me up. You didn't even know it, but I appreciate it. So I actually use the acronym TEAAM in the book, when I was referring to the currents. A lot of us in security — all human beings, really, if we're honest — get set in our ways. This is not a personality discussion; this is just a preference. I prefer chocolate ice cream. Why? I don't know. It's a preference. So if you think about risk as having maybe five dominant preferences, I break it out in an acronym. The acronym is TEAAM: T, transfer; E, exploit; A, accept; A, avoid; and then M, mitigate. A lot of security professionals get set in the mindset that risk management equals mitigate, when actually you have four other currents at your disposal. So being mindful of where your preferences are, where your team's preferences are, and where your organization's preferences are really goes a long way toward having that conversation across individuals, and then across individuals within organizations, to be able to say, hey, are we actually maximizing all of the resources at our disposal? If we switch from thinking about mitigating to transferring, what does that get us?
0:12:31 Max Havey: 100%. And it kind of reframes the whole notion of a risk appetite. I feel like I hear that word thrown around a lot, but in reality, how—
0:12:37 Beth Miller: Hungry are you? I don't know, exactly.
0:12:39 Max Havey: Exactly
0:12:40 Beth Miller: How much ice cream can you eat versus me? I don't know. Right?
0:12:43 Max Havey: Exactly.
0:12:44 Beth Miller: Hard to quantify. Right.
0:12:46 Max Havey: Well, and it's interesting as you frame it as these different currents. Mitigation is the one that's immediately going to pop up in your brain, the first one, but there are all these other places you can take it. That's extremely interesting, and I like thinking about it that way. It's far less binary — it's moving away from that broader binary thinking around risk.
0:13:08 Beth Miller: Yeah, thank you for that. I do think one of the problems — and this is a question you had posed in advance, so I know you're going to get to it, but I'll just tease it out here — is that language is very important, right? Because organizations may accurately claim that they have a great security culture, but truthfully, each division within an organization can have its own culture as well. So in each culture, what is the prominent language? What are the acronyms? What are the prioritizations? As a security professional, you almost have to be this excited tourist: every time you venture beyond the security boundaries, you're thinking, well, what can I bring back, or what can I share? That type of stuff. And having grown up traveling a lot — my father was military — knowing that you could pick up another language, or use a gesture that would soften the transfer of information, was always significant.
0:14:10 Max Havey: What's interesting in this conversation too is that it almost feels like a sociological discussion of cybersecurity and risk. I found myself going, oh yeah, you've got subcultures of cybersecurity depending on the business unit you're working with. And I think that's so interesting. It brings to mind, as you talk about the idea of traveling and everybody having a different approach, the interdisciplinary side of cybersecurity. A lot of security leaders I've talked to have this notion that you can train just about anybody to be in cybersecurity and think like that, but it's bringing in those other angles, and understanding how you can apply some of those different skills, that really makes a good analyst or a good member of a cybersecurity team.
0:14:57 Beth Miller: Yeah, I was just talking to a colleague of mine in security and he's brilliant. Matt Huber is his name, and he talks about the old way of thinking security as being the department of no. And he's actually emphasized that it's actually the department of how. So he wants to go about and explain, tell me what you want to do and then we can figure out how to do it. And I think again, that has this steward-like mentality. It has this navigator aspect to it, and really it creates this whole sense of journey. Let's do this together. And I think that is really more or less the mantra around human risk management is think about the possibilities of what we can amplify together.
0:15:38 Max Havey: And just zooming out on this a little bit. As a CISO yourself, how have you been able to apply these sorts of strategies in your own work? Have you seen successes from this? Can you tell us a little bit about that?
0:15:53 Beth Miller: So first, my role is as a Field CISO, so I am much more customer-facing than internal-facing. I just want to clarify that, because I respect Leslie Nielsen and his team and the work that they do internally. My role is really to take what they do and synthesize it for our customers, and then take what our customers are doing back into product, so that we're in alignment from an organizational perspective and a product perspective, and making sure we're delivering the services and products that we say we do. But how have I applied this specifically with my teams? Because I've had a bunch of different roles prior to being at Mimecast. I think the big shift is around stopping at measuring compliance. So yes, of course NIST is very important; yes, you have to have an audit capability. But this conversation about language matters: having the audit conversation around language, and not just around compliance, is really important. So yes, we count how many times threats and failures and mistakes appear — obviously they show up in briefs that go all the way up to the board. But really what you're looking for is the transition to how many times you are helping people navigate, equipping them, or changing the current. What is that ratio in your culture, and what do you want it to be? So you replace "who is responsible for this?" — again, that multidisciplinary functionality — with "what current were we in when an event happened?" for anything that was negative to the organization, and "what navigation tools were we specifically missing?" What resources do we need, beyond just more systems, more identity controls, more access controls, and less data to protect? And changing your metrics from click rate to report rate — see something, say something, as a simple explanation. Because what you measure tells the organization what you value.
So if I had to pick one audit question, the one I challenge my team — and have in the past — to make sure we're capturing is: when someone in my organization makes a security mistake, what happens next? That answer will tell you everything you need to know about your culture.
0:18:18 Max Havey: Absolutely. Well, I feel like there's often the focus on what is happening right now, and not necessarily what happens next. After we've mitigated all of this stuff, we've remediated all of this — what are we actually going to do to change this? What have we learned here, and how are we going to apply that?
0:18:33 Beth Miller: 100%. I want my team's first instinct to be "let's figure this out together," not "let's figure out who's to blame," right? When you name the current you're in, you stop being swept away by it. It gives you that pause, and it gives you permission to really explore what other options could have been at play, and that allows you to do better next time.
0:18:55 Max Havey: Why do you think this sort of human-centered approach is sort of beneficial broadly to the cybersecurity community at large? This is a significant mindset shift. So why is something like this beneficial for an industry like this?
0:19:08 Beth Miller: I'm going to make some bold statements here, but burnout is at a crisis level, and fear-based cultures eat their own people. Our adversaries already have PhDs in human psychology, so we're fighting manipulation with awareness campaigns, and that's just a mismatch. Security that works with human nature scales; training that fights it needs constant refreshing. The security transformation from the department of no, which I mentioned, to the architecture of how is also a retention strategy. Security professionals who feel like they're building navigators will stay longer and burn out less.
0:19:50 Max Havey: It creates a good security culture. We're often talking about good security hygiene, but that is a good security culture: not prioritizing fear, not burning people out, but giving people the skills to actually be good stewards of security within their own business unit. Whether that's someone on a security team, or someone who's not necessarily part of the security team but is a security-focused person on a marketing team or a creative team. I think one of our past guests talked about the notion of an ambassador program as a way to seed some of these new ideas and innovations within an organization like that. And it's very much leading by the people who know what they're doing and can answer the sorts of questions you're talking about.
0:20:35 Beth Miller: Absolutely. I would just close that out with maybe two different thoughts. You mentioned good, and I agree with good, but the word I would replace it with is resilience. When we talk about where we're at and where we want to be: if you're still measuring with compliance, then compliance is the floor, and resilience is the ceiling — or doesn't even have a ceiling, but it's what you're driving towards. We've been celebrating the floor when we should be focused on rising to the ceiling. And a security culture that burns people out, like you said, doesn't make anyone safer. And right now we really need to be more safe than ever.
0:21:13 Max Havey: There's nothing that actually helps folks more than a good security strategy, one that enables folks rather than holding them back and causing more friction. I think I said it early on, but it's that rising tide that raises everybody up as a result.
0:21:29 Beth Miller: Agreed.
0:21:31 Max Havey: Well, Beth, coming to the end of my questions here, what's one tip that you think every security leader should keep in mind when it comes to a human-centered approach like this?
0:21:41 Beth Miller: Yeah, so I think I said it before, but I'm just going to double-click on it, because I think it's worth saying as we wrap up, and it's something to leave folks with. Auditing the emotional experience of your security program is something that needs to happen almost before you audit the controls. By asking "when someone in my organization makes a mistake, what happens next?", that answer tells you everything about what needs to be prioritized in the organization in order to create that navigator mentality, that ambassador mentality, that team approach. The fastest single change you can make costs nothing: make it safer to report a mistake than to hide it. Creating an environment where people feel safe to do the reporting is probably the single thing I would harp on. But at the end of the day, going back to the beginning, risk is human. We are born with an innate ability to take risks, but navigating risk is actually learned. So organizations need to start thinking about how to teach that navigation — not just because it benefits every employee, but because it's a good business decision at the end of the day.
0:22:56 Max Havey: Absolutely. That feels like a good call to action to end on: daring to navigate the seas of risk with your team in tow. I think that's a lovely way for us to land here.
0:23:07 Beth Miller: Max, I really appreciate your time here today. This was such a pleasure.
0:23:11 Max Havey: Absolutely. Thank you so much for coming on, Beth. It's been such a joy, and I hope our listeners enjoy this conversation as much as I did.
0:23:19 Beth Miller: Thank you. Me too.
0:23:21 Max Havey: And with that, you've been listening to the Security Visionaries podcast, and I've been your host, Max Havey. If you liked today's episode, be sure to share it with a friend and follow Security Visionaries on whatever podcasting platform you love, whether that's Apple, Spotify, or YouTube. There you can find our back catalog of episodes, and while you're there, please give us a follow or a review — every little bit helps. Keep an eye out for episodes dropping every other week, hosted either by me or my co-host, Emily Wearmouth, or Bailey Pop. And with that, we'll catch you on the next episode.