
No matter what industry you’re in, if you are a technology or security professional, you’ve likely been inundated with questions about agentic AI and how it can either supercharge your workforce or turbocharge your threat exposure.
With a few years of experience under my belt, I wanted to write a blog with advice on finding the right way to approach and secure whatever the latest “shiny new thing” happens to be… but it’s the holiday season, so I would be remiss not to lean into a fun festive analogy.
So here’s some holiday-themed advice I can offer about securing and enabling agentic AI.
Make a list and check it twice
The best place to start is to make sure you’re listening to what the organization wants to use.
Just as everyone makes a Christmas list, you need to know what applications and tools your organization wants to use. As we’ve found with many shiny new tools over the years, people will find a way to use what they want, whether security knows about it or not. This year, agentic AI is probably very high on those wish lists.
As we all learned with generative AI a few years ago, there’s no sense in just blocking these sorts of tools sight unseen. Instead, it’s best to find a way to enable them responsibly. But please remember: while hope is a key tactic among children on Christmas Eve night, it is not a strategy in cybersecurity. As a security leader, you need a plan for how you’re going to enable and secure these tools and applications. And that means starting with visibility.
Know who’s naughty or nice
If there’s one thing we know about the big man in red, it’s that he is always watching. Santa has constant visibility into who is naughty and who is nice, and you want to mirror that kind of visibility when adding any new tool to your organization.
This is no less important for agentic AI solutions, which move very fast once they’re implemented and can lead to major exposure or even abuse if not configured properly.
Make sure you know which apps have access to your organization’s data and how that data is being used. From there, you need the proper policies in place to control access to, and usage of, sensitive or confidential data.
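To make this concrete, here’s a minimal sketch of what such a policy check might look like. Everything in it (the app name, the data classifications, the helper function) is a hypothetical illustration; in practice this logic usually lives in your DLP, CASB, or AI gateway tooling rather than in hand-rolled code.

```python
# Minimal sketch of an allow-list policy check for AI tools.
# All names here (APP_POLICIES, may_send, the classifications)
# are hypothetical -- adapt to your own inventory and tooling.
from dataclasses import dataclass

@dataclass
class AppPolicy:
    approved: bool                      # is the app on the sanctioned list?
    allowed_classifications: frozenset  # data classes the app may receive

# Inventory built from your organization's "Christmas list" of tools.
APP_POLICIES = {
    "agentic-ai-assistant": AppPolicy(
        approved=True,
        allowed_classifications=frozenset({"public", "internal"}),
    ),
}

def may_send(app_name: str, data_classification: str) -> bool:
    """Allow data to flow only to approved apps cleared for that data class."""
    policy = APP_POLICIES.get(app_name)
    if policy is None or not policy.approved:
        return False  # unknown or unapproved app: block and flag for review
    return data_classification in policy.allowed_classifications

print(may_send("agentic-ai-assistant", "internal"))      # True
print(may_send("agentic-ai-assistant", "confidential"))  # False
```

The design point here is the default: an app you haven’t inventoried gets no data at all, which is exactly the visibility-first posture described above.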
Say I deploy a bunch of agents within a company, and one of those agents gets instructed to do something wrong. Visibility is the essential starting point, but we still need to venture into uncharted territory and determine how much damage has really been done.
And to do that, you need the right folks leading the charge… which leads to my next tip:
Let the reindeer pull the sleigh
As a CISO, I am a big believer in a developer-led approach when it comes to finding ways to secure and enable AI.
When you’re dealing with such a new area of evolving technology, there will always be limitations to what you personally can know. That’s why you need to be able to turn to a trusted team to help move things forward, just as Santa does with his trusty team of reindeer pulling the sleigh.
I rely on my developers to help ask and answer the tough questions that come up. Questions like: What happens if a malicious agent goes rogue? How do I investigate that? Do I go to the server and capture the memory? Do I use traditional forensics and incident response techniques, or do I have to rethink the entire process and use something totally different?
None of these are easy questions to answer. But hey, nor is it easy to be a flying reindeer! Getting the team into this mindset now, before agentic AI feels out of control in the organization, lets them start thinking through these sorts of questions and feel comfortable working on them if and when they ever wind up in this situation.
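One concrete starting point, while the team works through those questions, is to make sure every agent action leaves evidence behind. Here’s a minimal sketch, in Python, of wrapping an agent’s tools with audit logging so incident responders have a trail to follow; the decorator, the example tool, and the log format are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: log every agent tool call before it executes,
# so a rogue agent still leaves a forensic trail. The tool and
# log format below are hypothetical examples.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

def audited(tool_fn):
    """Wrap an agent tool so every invocation is recorded before it runs."""
    def wrapper(*args, **kwargs):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool_fn.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }
        # Log *before* executing, so even a destructive action
        # is captured for investigators.
        audit_log.info(json.dumps(record))
        return tool_fn(*args, **kwargs)
    return wrapper

@audited
def read_document(path: str) -> str:
    """Hypothetical agent tool; the real implementation goes here."""
    return f"contents of {path}"

read_document("/shared/roadmap.txt")  # emits an audit record first
```

Logging before execution, rather than after, is the deliberate choice here: it means the question “what did the agent actually do?” has an answer even when the action itself destroys the evidence.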
Build a team of hardworking elves
At the same time, we also know one of the big reasons Santa is so jolly is that he has a trusty team of elves doing the important work to keep things running smoothly.
Beyond just developers, I would recommend every security team build out a group of AI ambassadors. These are the hardworking team members across the organization’s departments who focus on sharing information from the security team and the AI governance committee with their teams. And like elves, they can take on some of the work themselves by reviewing proposals before those come to the AI governance committee and the security team for review.
A team of trusty AI ambassadors will help give you an internal layer of visibility, while also helping to contextualize some of the key best practices every department should be using when it comes to new AI tools.
May your days be merry and bright (and secure)
The holiday season and the new year can be a stressful time, especially if you’re trying to implement and secure the new tools your organization has been asking for. But if you know what kinds of tools you’re dealing with, build visibility into your plan from day one, and have developers leading your approach, you’re at least starting off on the right foot.
Just make sure you remember to leave out plenty of milk and cookies (or pizza and energy drinks) for the people helping make these plans a reality.
