Max Havey: Hello, and welcome to another edition of Security Visionaries, a podcast all about the world of cyber, data, and tech infrastructure, bringing together experts from around the world and across domains. I'm your host Max Havey, Senior Content Specialist at Netskope, and today we're talking about analyst research. Now you've almost certainly heard the names of many different analyst firms thrown around as research goes out throughout the year. Things like the Gartner Magic Quadrant, the Forrester Wave, the IDC MarketScape, all of these illustrate where vendors stand when it comes to key capabilities that buyers are looking for. With that in mind, I've brought in some guests who have experience in this world, and with the Gartner Magic Quadrant specifically, to tell us a bit more about these kinds of analyst creations and how they function. First up, we've got Steve Riley, a former Gartner analyst who helped author a number of research notes in his time there, including the Magic Quadrant for CASB and the Market Guide for ZTNA. Welcome, Steve.
Steve Riley: Thanks, Max. Good to be here.
Max Havey: Glad to have you. And we've also got Mona Faulkner, whose entire job is analyst relations. Welcome, Mona.
Mona Faulkner: Thanks. Glad to be here.
Max Havey: So to start things off here, I think it would be helpful to sort of outline what an MQ is. The term Magic Quadrant kind of gets thrown around a lot in industry conversations. So it'd be helpful to just outline why they exist and how they work. Is it a pay-for-play opportunity? Steve, can you take us through this a little bit?
Steve Riley: Yeah. Well, so let's dispel the myth first. These are not pay-for-play opportunities. In fact, as an analyst, I had no visibility into how much vendor clients spend with Gartner. The idea there is that keeping me ignorant of that information means that how much a vendor spends is much less likely, or not at all likely, to influence how I would assess them. It's all about vendors creating products that clients are interested in and have expressed a desire to learn more about. Now, how these things arise is that when an analyst or a group of analysts realizes that, hey, a new market is emerging, they'll first write a short research note just kind of describing what that is. And then, if it seems that market is gonna persist, the next thing that happens is maybe a couple of years of a market guide. Now, market guides describe what the market is, what problems it attempts to solve, the likelihood that it will succeed, things like that, and the risks and benefits of adopting such technology.
Steve Riley: And it lists vendors in that market, but it isn't a comparison just yet. Then if that can survive for a couple of years, yep, that's a good sign that this is a market that has established itself, at least in its beginnings. And then the analyst will switch to writing a Magic Quadrant. And that's what happened with CASB. Craig and I watched the market guide for a couple of years, and then we decided it was time to create a Magic Quadrant for that. Now, Magic Quadrants exist to compare vendors in their respective markets. And it compares a couple of different things. One is the vertical axis, the execution axis. Some people think it's all about how much a vendor sells. And while sales is a component, there are like seven or eight other components that go into calculating where on the vertical axis a vendor might land. The horizontal axis is much more about vision and strategy: where is the vendor going to go in the future? How well will they compete against others?
Steve Riley: And again, there's several dimensions that constitute that. And by having these together, an enterprise, any Gartner client really, can use this information along with much other information, like also the critical capabilities note that looks at the tech itself, to determine if this is a vendor who they feel for their use cases is worth partnering and purchasing from, generally over a long term. People don't use MQs for year to year decisions. They use them for more strategic things.
Max Havey: So sort of a thing that helps people better understand the landscape of these capabilities and find the thing that will work best for what they want to do. What's the best thing they should have on their mind as they're looking for new capabilities?
Steve Riley: Right. And that's why it's so important for clients to read all of the words in an MQ. I know it's tempting to look only at the picture and make a decision. And in fact, I had some clients ask me one time, just tell me who to buy, point on the picture to who I should buy. I'm like, please, I wrote those words. They're very, very important. Each one of them is lovingly crafted. There's information in the words that isn't necessarily conveyed in the graphic. So use the whole thing, add in the Critical Capabilities, add in conversations with analysts too. This is how you make a well-rounded purchasing decision.
Max Havey: Absolutely. And that sort of thing takes a lot of information. So that's kind of the other side of the coin on this Mona. Can you tell us a bit about what an analyst relations person does when working with these analyst groups? Can you tell us, what is this role all about as it relates to this research?
Mona Faulkner: Yeah. It's such a great question and it's such a loaded question. And I think I still have to explain to my mom what I do for a living every day. [chuckle] The crux of this is if you are doing your job well in the world of analyst relations, you are providing the highest-quality, most accurate information possible to an analyst so that they can do their jobs well. And what is their job? Their job, just to add to what Steve was saying, is to serve their clients with the most accurate, up-to-date information about a space and about specific vendors, and what an organization can do with that information to make the right decisions for them. So if you look at the MQ graphic or a Forrester Wave graphic or an IDC MarketScape graphic, you'll see a bunch of dots. But that doesn't mean that a vendor in a specific position is the end-all be-all for an organization. It depends on where an organization is and what their needs are. And so, a graphic can tell quite a big story. A lot of the details, to Steve's point, that are in the write-up of the report tell an even bigger story. It explains the market trends directionally, what the challenges are that organizations are facing, and what their biggest pain points are.
Mona Faulkner: And we all know one organization's pain points are different than another's. So you don't take a look at an MQ, or Wave, or a MarketScape and say, this is cookie cutter for all organizations. That would not be fair or even accurate. So for an AR professional to do their job well, they need to understand an industry analyst's needs and the markets that they're covering, and help make their jobs so much easier. There is a wealth of information that they can tap into within a vendor. How do you help them navigate that as quickly as possible so they can get their job done?
Max Havey: Yeah, to a degree, kind of helping cut through the buzz and separate the signal from the noise in all the things that are happening in a given year.
Mona Faulkner: Exactly.
Max Havey: Well, in thinking about that then, Steve, as someone who has been in that analyst position, a person who's working on this sort of research, what does that process look like for analysts putting together something, say, like an MQ? How deep into the technical weeds are you going? What does that process look like in taking in this significant amount of information?
Steve Riley: Well, it begins with the analyst making the case to management that a market is worthy of an MQ, because it is a six-month exercise from when you initiate until you publish. And that takes time for analysts who also have many other things on their plates. So first of all, make the case. Then once approval arrives, you start the process, which is very well documented. All Gartner research notes have a methodology analysts must follow, so that's helpful: you're not trying to invent something out of whole cloth. Typically, the first thing that happens is the analyst will write a welcome pack inviting vendors to participate. And it includes things like inclusion criteria: here are the things you must have in order to meet the parameters to join the research. Usually, it's around things like certain capabilities that are considered core, a certain amount of revenue, and, not always but sometimes, a certain number of deployments across companies or seats, those typical things.
Steve Riley: Now, sometimes people criticize the methodology, saying that MQs are often built such that they ignore startups. And that's absolutely not the case. There can be a sliding scale for these sorts of things, which is what Craig and I often tried to do, especially in the later days of the CASB MQ. Then when vendors respond, the analysts determine whether the vendors meet those inclusion criteria. And if they do, the next thing vendors receive is a series of questions, it used to be a spreadsheet, now it's done online, that evaluates the vendor against each of the evaluation criteria. Seven of them are, I'll never remember which, seven are horizontal and then eight are vertical, or maybe it's the other way around. It doesn't really matter, but there are 15 different criteria. And through multiple questions, the analysts can develop an understanding of how the vendor performs in each of those criteria, which is then used to determine the position on the graph. That's an internal Gartner process that converts the analysts' opinions into the position. And I do have to state, and you can scroll to the bottom of every Gartner research note and find this, that the content in the note represents the opinions of the analysts. And that's true for all firms, right? It's analysts who write these; it's their impression, their opinions.
Steve Riley: Now, we all strive as much as possible to keep these things fact-based, because an opinion without a fact base is kind of pointless, but the analysts also rely a lot on prior experience. So you kind of asked that in your question, Max: how far technically do analysts go? It depends on which part of Gartner they're a member of. I was in Gartner for IT Leaders. For cloud security research, my audience was CISOs and CIOs. That's who I wrote for, which means no, I didn't like install every product I wrote about. I might