Last week I was in Salt Lake City presenting at the University of Utah's AI in Business Symposium, a room full of executives and business leaders from across the state, all wrestling with the same question in slightly different forms: What do we actually do about AI?
Standing up there, I kept thinking: this is exactly the conversation every security team needs to be having internally. Not a policy meeting. Not a vendor briefing. A real conversation about where we are, where our adversaries are, and what the gap between those two things is going to cost us.
I titled the talk "AI Isn't the Threat. Your Reaction Is." I want to bring that same conversation here.
When generative AI exploded in 2023, every security leader I knew faced a version of the same moment: someone in legal, HR, or the C-suite walked into their office, or more likely sent a Slack, and asked, "What are we doing about AI?"
What happened next defined a lot of careers. Some leaders defaulted to "no." Build the blocklist, draft the prohibition policy, send the all-hands email. Problem solved. Others swung the opposite direction: let it rip, figure out governance later. Both approaches created real damage, just different kinds.
The leaders who are winning today did something harder. They got curious before they regulated. They built coalitions before they wrote policies. They applied rigor to a genuinely novel problem instead of reaching for a familiar answer to a question they hadn't fully understood yet.
I've lived this. At PayPal and at Qualtrics, I made decisions in the middle of the AI acceleration that I'm proud of, decisions I'd redo differently, and a few I'm still sorting out. Here's what I've learned.
The Threat Model Has Already Collapsed
To understand why AI governance matters so much right now, you need to understand how dramatically the threat landscape has changed, and how fast.
Twenty years ago, a sophisticated cyberattack required a real team: people with specialized skills in reconnaissance, exploitation, lateral movement, data exfiltration. That team had to coordinate. They had overhead. Attacks were expensive, and that expense provided a natural filter. It didn't make us safe, but it kept the noise manageable.
Then the marketplace model emerged. You could buy access to compromised systems, rent malware-as-a-service, purchase stolen credentials in bulk. The barrier didn't disappear, but it fragmented. Attackers no longer needed every skill in-house; they just needed capital and connections.
AI is collapsing what remains of that barrier. Today, someone with no technical background and a credit card can generate convincing phishing emails, clone voices, create video impersonations, and automate target research at a scale that would have required a mid-sized red team five years ago. The sophistication floor has dropped to nearly zero. The volume ceiling has risen to nearly infinite.
And unlike defenders, attackers don't have corporate legal teams, HR policies, or compliance frameworks telling them to slow down.
You cannot hide behind "good enough" in this environment. Security through obscurity, the quiet assumption that you're too small or too uninteresting to target, is gone. The question is no longer whether AI will be weaponized against your organization. It already is.
You Cannot Wish AI Away
Here's the part that keeps security leaders up at night: your employees are already using AI. Extensively. Often in ways you can't see.
Qualtrics' own employee experience research found that nearly half of employees use AI tools daily or weekly. Of those, only about 20% rely exclusively on company-provided tools. The other 80% are mixing in tools their IT and security teams didn't approve, didn't assess, and in many cases can't see. Only about half of employees say their organizations provide any AI training or ethical guidelines at all.
Shadow AI is not theoretical. It is happening right now, in your organization, probably in multiple business units, almost certainly involving data you care about.
The data risk here is specific and underappreciated. When an employee pastes a customer contract, internal strategy document, or employee record into a consumer AI tool, that data doesn't stay local. It travels outside your system of record, outside your data governance controls, outside your contractual obligations, and in most cases, outside your visibility entirely. The threat isn't just that the output might be wrong. It's that the input has now left the building. I've watched organizations spend years building sophisticated DLP controls only to have employees casually route sensitive data around all of it through a chat interface. That's not a technology failure. It's a governance failure that a technology strategy could have prevented.
The instinct to block is understandable. But think through what blocking actually produces. You push usage into personal devices. Into personal accounts. Into tools with no enterprise data agreements and no audit trail. You don't eliminate the risk; you just lose visibility into it, which is materially worse.
Blocking AI doesn't reduce risk. It moves risk somewhere you can't manage it.
A Workforce That Doesn't Use AI Can't Defend Against It
There's a second-order problem that security leaders often miss, and it's the one that concerns me most.
If your employees have never worked with AI-generated content, never crafted a prompt, never noticed the subtle artifacts in AI output, never experienced the uncanny-valley quality of a synthetic voice, they will not recognize AI-generated threats when they encounter them.
You can't spot what you've never seen.
A year ago, I spent $20 to prove this point. I created a deepfake of the Qualtrics president, an AI-generated video depicting him as a kind of "evil AI persona" sent from the future to warn employees about exactly this threat. I talked about it publicly at Fortune's Brainstorm AI conference. I wasn't trying to frighten people. I was trying to close a perception gap that I knew existed.
The price point matters. Twenty dollars. Not a sophisticated nation-state operation. Not a well-funded criminal enterprise. Twenty dollars and a consumer AI tool, and you can create something that a significant portion of employees will accept as real, especially if they've never been exposed to how these tools work.
And here's the uncomfortable math: if you've banned AI at your company, your employees have no frame of reference. They've never experimented with these tools. They've never made something fake and noticed how convincing it was. They've never developed the instincts that come from regular exposure. You've optimized them to fail the test.
Blocking AI doesn't just limit productivity. It actively weakens your human security layer, which remains the most important layer you have.
What I Did. What Worked. What I'd Change.
The right framework, in my experience, starts with the Core, Context, Commodity lens.
Not all AI use is equally important to govern. Some AI capabilities are genuinely core to your competitive position: the way your product uses AI, the way AI interacts with your most sensitive data. These need real governance, deep assessment, and ongoing oversight. Other uses are context; they matter, they require attention, but they're not make-or-break. And a lot of AI use is commodity. It's roughly as risky as using a search engine, and treating it like a critical security event wastes everyone's time and breeds contempt for your program.
When you're clear about which is which, you can build controls that are appropriately calibrated. Heavy governance for core. Reasonable guardrails for context. Sensible defaults for commodity. This sounds obvious until you've watched a security organization apply the same approval process to every AI interaction regardless of risk level, at which point employees route around the whole thing, and you've lost the plot.
Build a coalition first. Before you write a policy, have conversations with legal, with product, with engineering, with HR. Come with questions, not answers. What are you trying to accomplish with AI? What risks worry you most? What would "good" look like? That listening changes what you build. A policy that legal helped design gets followed differently than a policy legal receives in their inbox.
What I'd do differently today: I would have started the internal education earlier. The technical controls matter, but they're the easier half. The harder half is building an organization that understands the risk well enough to make good decisions at the edge, because the edge is where most decisions get made.
Three Obligations for Security Teams
Having laid out the landscape at the symposium, I want to be direct about where it leaves us as a security function.
First, we need to be adopting AI ourselves. If your tooling is pre-AI, you are in an asymmetric fight that you are losing. AI-assisted triage, detection, threat intelligence, LLM-based defense: these are not future investments. They are table stakes you're catching up to. Every security leader should be actively asking where AI can make their team faster, sharper, and less dependent on manual work that no longer requires human judgment.
Second, we need to be enabling the organization, not blocking it. Every time security says no without offering an alternative, it pushes usage somewhere it can't be seen. Shadow AI is a governance failure, not a technology failure, and it's one we can prevent. Govern to enable, not to restrict. Provide tools that are genuinely better than the free alternatives. Build policy with the people who will actually live under it, because imposed rules get bypassed and co-created rules get followed.
Third, we need to understand AI well enough to govern it. You cannot write good policy for something you haven't worked with. You cannot spot AI-generated threats if you've never created AI-generated content. Enablement is security. If your team is behind on AI literacy, you are not just limiting productivity; you are weakening the most important security layer you have, which is human judgment.
Building Guardrails, Not Walls
I'm not arguing for open doors. I'm arguing for the right doors.
The goal is guardrails that enable speed: controls that let your organization move fast with AI while maintaining meaningful protection for the things that actually matter. This requires being genuinely good at the basics: data classification, access controls, vendor assessment, logging and monitoring, incident response. If your fundamentals are weak, AI governance becomes impossible, because you don't have a clear picture of what's at risk.
It also requires accepting a truth that's uncomfortable but necessary: you cannot fight AI-enabled threats without AI-enabled defense. Your adversaries are using AI to scale their attacks; meeting them with pre-AI tooling is the asymmetric fight I described above. Investment in AI-native detection, AI-assisted triage, and AI-powered threat intelligence isn't optional anymore.
The Path Forward
Here's the reality: AI is not going away. It is not a trend. It is not a phase. The organizations that treat it as a problem to be blocked rather than a capability to be governed will spend the next several years accumulating debt in risk, resilience, and relevance.
But I want to end on something other than alarm, because I genuinely believe we're going to get through this.
The next two to three years will be harder than the last two. The threat surface is expanding faster than most organizations can respond. Employees are ahead of their employers on AI adoption, attackers are ahead of both, and the governance frameworks we need are still being written. We will accumulate exposure as we build toward the world we need.
But we will build toward it. The security community has navigated discontinuous change before: the shift to cloud, to mobile, to DevOps. We didn't get those transitions perfect, but we got them right enough. We built coalitions, we adapted our models, we found the balance between enabling the business and protecting it.
The single biggest advantage any team can have right now is simply being early. Early to understand AI. Early to use it. Early to adapt. The leaders who will come out of this period with credibility and impact are the ones doing that now: understanding AI before they regulate it, partnering across the organization before they write the policy, building the guardrails that let their companies move fast and still sleep at night.
That's the job. It's hard. It's worth doing.
Assaf Keren is a security executive with experience as Chief Security Officer at Qualtrics and CISO at PayPal. He writes about the intersection of security strategy, organizational dynamics, and emerging technology.
