Cyber security is falling short in response to AI adoption. As businesses move rapidly to integrate AI, cyber security leaders are being challenged to rethink their assumptions: traditional models are being tested in ways that demand a fresh approach.
While organisations move between AI maximalists promising transformation and minimalists predicting disruption, the reality is more nuanced. The future isn't built at the poles of an argument, but on the complicated middle ground where real problems are actually solved.
During a recent live discussion following their collaborative essay "AI, Quietly Everywhere", Vannessa van Beek, CISO at Fortescue and AI researcher at Murdoch University, and Dan Haagman, CEO of Chaleit and Hon. Professor of Practice at Murdoch University, explored the AI security challenges further.
Their conversation revealed important insights about where the industry stands and sparked new thinking about AI security testing, including a proactive method for understanding how AI systems fail before attackers exploit those failures.
Not your usual tech stack
One of the biggest challenges organisations face isn't securing just another technology stack; it's securing systems that think, reason, and act autonomously in ways we can't fully predict.
"They don't always behave the same way every time you use them," Vannessa observed.
"There's kind of an unpredictability in how the models themselves work. And that's a challenge because in security, we'd like to put a control in place based on predictability."
This fundamental unpredictability means traditional approaches to security — those empirical X + Y = Z equations that cyber security professionals have relied upon — may not be enough anymore. The challenge extends beyond protecting data. It's now about protecting the intelligence that drives business decisions.
These new vulnerabilities can manifest in various ways, from AI systems being manipulated through carefully crafted prompts to perform unauthorised actions, to traditional security controls being circumvented through AI-assisted attacks. It’s an entirely new category of risk that most organisations aren't prepared for.
Shadow AI
Shadow AI proliferation is another big problem. Vannessa shared that she has been using AI visibility tools across her organisation and discovered "a whole mushrooming of shadow AI happening." People are using personal credit cards to sign up for AI services, experimenting with tools that may be storing sensitive data in uncontrolled environments.
This isn't necessarily malicious — it's enthusiasm. For example, Dan mentioned that one of Chaleit’s VPs built his own LLM because he couldn't keep the context of code reviews in his mind. The solution wasn't to shut it down, but to embrace and enable it safely.
The key insight, as Vannessa pointed out, is that "not all data is equal." Organisations need to be pragmatic about risk while still enabling innovation.
This changes how we think about supply chain risk management, especially when AI applications can be purchased on a credit card and implemented without traditional procurement oversight.
From chaos theory to security chaos engineering
During the live discussion, Vannessa introduced chaos theory into cyber security thinking. "This idea that we do a single pen test a year will no longer work in this very unpredictable, volatile, and constantly changing environment. So we need more regular testing."
The observation sparked what Vannessa described as an "aha moment": the realisation that traditional testing approaches are fundamentally inadequate for AI systems.
Drawing on her research, we turn to Security Chaos Engineering (SCE). Pioneered by Aaron Rinehart, SCE is a proactive approach that evolves traditional penetration testing by injecting faults and attacks into systems to identify weaknesses before adversaries do.
Vannessa explains the core principle:
"There’s a quiet truth emerging in cyber security circles — one we don't talk about often enough: AI will fail. It won’t always be obvious. It won’t always be catastrophic. But it will happen. And the question is not if AI will behave unexpectedly — but when, how often, and how well we've prepared."
When applied to AI systems, especially in cyber security contexts, SCE becomes crucial due to the non-deterministic nature of AI, where outputs may vary despite identical inputs, and behaviour may evolve over time.
Why AI breaks differently
Unlike classical systems, AI introduces new layers of complexity that traditional security approaches struggle to address. As Vannessa outlines, AI systems are:
Probabilistic: Responses can vary even with identical inputs.
Opaque: The logic behind decisions is often difficult to trace.
Dynamic: Models evolve due to drift, retraining, or new data exposure.
These properties create real challenges for cyber security: traditional controls and validation rules aren't sufficient when systems behave probabilistically rather than deterministically.
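To make that non-determinism concrete, a simple probe can quantify it: send the same prompt repeatedly and measure how much the answers disagree. The sketch below is illustrative only; the `query_model` function is a hypothetical stand-in for whatever model client an organisation actually uses.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical client for the AI system under test.
    Replace with a call to your actual model or API."""
    raise NotImplementedError

def consistency_probe(prompt: str, runs: int = 20) -> float:
    """Send an identical prompt many times and return the share of
    responses matching the most common answer (1.0 = fully stable)."""
    answers = [query_model(prompt).strip().lower() for _ in range(runs)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / runs

# Example: a policy question the system should answer the same way every time.
# stability = consistency_probe("Is customer PII allowed in prompts?")
# print(f"Stability score: {stability:.0%}")
```

A low stability score on questions that should have one right answer is exactly the kind of signal a deterministic control regime would never surface.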
SCE offers a proactive strategy to assess how AI behaves under pressure:
"We run hypothesis-driven experiments. We inject adversarial prompts. We simulate data drift. We measure how systems degrade — and how quickly they recover," Vannessa explains.
The approach doesn't replace security controls but validates and hardens them. Critically, it also exposes shadow AI: the unsanctioned tools employees use, the models fine-tuned in isolation, and the scripts running outside of IT’s view.
Vannessa also introduced the concept of "security attach". While the ideal is to implement security by design, real-world constraints sometimes force a reactive approach: attaching controls to an already deployed AI solution. This route is known to incur greater cost and complexity; studies have shown that retrofitting security after deployment can cost an order of magnitude more than integrating it during development.
This acknowledges the reality of how organisations actually adopt technology: organically, enthusiastically, and often ahead of formal governance processes. As Dan noted, organisations need minimum viable security positions that can evolve with the technology.
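As a rough illustration of what "security attach" can look like in code, the sketch below wraps an already deployed model client in a thin policy layer that redacts sensitive data on the way in and on the way out. The `existing_model_call` function and the regex patterns are hypothetical placeholders, not a prescription for any particular product.

```python
import re

def existing_model_call(prompt: str) -> str:
    """Stand-in for the AI solution that is already deployed and cannot be redesigned."""
    raise NotImplementedError

# Hypothetical patterns for data that should never reach, or leave, the model.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                # card-like numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

def redact(text: str) -> str:
    """Mask sensitive fragments instead of blocking the whole interaction."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_call(prompt: str) -> str:
    """Attach controls around the deployed model: redact inputs and outputs."""
    response = existing_model_call(redact(prompt))
    return redact(response)
```

A wrapper like this is cheaper than redesigning the system, but it also shows why retrofitting costs more over time: every control has to be bolted on and maintained at the boundary rather than built into the solution itself.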
Resilience through designed failure
Vannessa's research on Security Chaos Engineering changes the way we think about AI:
"Of course, we should expect failure. We don't expect planes to never hit turbulence, we expect them to recover. AI is no different. The unpredictability isn't a flaw. It's a feature that forces us to be sharper, more adaptive, and more accountable in how we design security."
This creates a feedback loop that turns uncertainty into insight. Over time, it builds confidence, not because everything is perfect, but because organisations have tested what happens when it's not.
SCE creates what Vannessa calls "observability under duress": a core tenet of resilient design that becomes essential when dealing with AI systems that can behave unpredictably.
Ship and shipwreck
Vannessa shared a powerful metaphor, drawing on theorist Paul Virilio's insight that "The invention of the ship was also the invention of the shipwreck, but it was also the opening of new markets, new economies, and new trading routes." Virilio's view captures both sides of technological advancement: that every innovation brings new failure modes alongside new opportunities, creating an essential balance in how we understand progress.
AI is our new ship in uncharted waters. It could end in shipwreck or open up entirely new possibilities.
But as both experts acknowledged, the industry has navigated transformations before.
"We did this with the cloud. We did this with the birth of telephony, with the birth of mobile data, with the launch of the web browser." As Vannessa encouragingly put it, "anyone who's led an organisation from on-prem to cloud has done a similar kind of journey."
The difference this time is the amplitude of change and the speed of oscillation. Organisations aren't just steering; they need to steer fast enough while building the capability to test and validate their approach continuously. And leadership plays a crucial role in this process.
Leadership paradigm shift
Both experts identified that the biggest barrier to scaling AI isn't employee readiness; it's leaders not steering fast enough.
Dan has observed this in organisations trying to manage AI from a pure compliance perspective, only to realise they're falling short the moment they try to assess for compliance.
As Vannessa asked, "Have the security teams kept pace with the AI teams, and have the AI teams been engaging closely enough with the security teams? Because no one will use AI unless they trust the AI. They trust the AI generally through the security controls that are in place."
This requires cyber security leadership to be enablers, not just gatekeepers. The most effective approach involves what Vannessa calls "one coffee conversation after another" — influence through partnership rather than control through prohibition.
Let’s move from asking, "How do we stop this?" to asking, "How can we enable this safely?" It's about creating isolated environments for experimentation, applying pragmatic risk lenses, and then bringing successful innovations into the enterprise with appropriate controls attached.
Key takeaways
The collaboration between Dan and Vannessa revealed several valuable insights for cyber security professionals:
Embrace unpredictability. Traditional security models based on predictable behaviours won't work with AI systems that reason and act autonomously. New approaches must handle inherent unpredictability and continuous change.
Visibility drives everything. Organisations can't secure what they can't see. Investing in AI visibility tools to understand shadow AI proliferation is crucial before implementing controls.
Test for failure. Security Chaos Engineering provides a proactive approach to understanding how AI systems fail under pressure, building resilience through designed experimentation.
Pragmatic risk enablement. Not all data is equal, and not all AI use cases carry the same risk. Focus should be on protecting critical data while enabling experimentation through isolated environments.
Security attach when necessary. When secure by design isn't possible due to business realities, organisations must be prepared to wrap appropriate controls around existing implementations post-build.
Leadership through partnership. The biggest barrier isn't technical; it's leadership not steering fast enough. Effective AI adoption means moving from gatekeeping to enabling. As McKinsey notes, leadership, not employee skill, is the top impediment to scaling AI. Success comes from influence through collaboration, coffee conversations, and empowering rather than controlling.
As Dan and Vannessa discovered, working problems together reveals new perspectives and solutions.
The chaos organisations are experiencing isn't just a challenge but an opportunity to fundamentally improve approaches to cyber security in an AI-enabled world. The industry has the capability to deal with these challenges, but it requires applying that capability with urgency, adaptability, and a continuous testing mindset.
As Vannessa concluded, "It's about preparing for imperfection" while building systems that can recover and adapt.
AI systems think for themselves, so maybe the most human response is to think more creatively about how we secure them.
About the authors
Vannessa van Beek
With over 30 years of industry experience, Vannessa van Beek is a Cyber Risk Leader specialising in Security and Transformation. Her background in law, business administration, and psychology provides her with an in-depth understanding of technology, risk, strategy, and organisational culture. She possesses a unique combination of strategic, operational, and leadership skills that enable her to build enduring relationships.
Throughout her career, Vannessa has developed and implemented strategies for growth and delivery performance at DXC Technology and Telstra. Most recently, she led cyber security capability across various sectors at Kinetic IT and Avanade. She has also led high-impact cyber resilience strategies for cyber transformation programs, while managing internal and external relationships, including board reporting.
Vannessa is driven by a passion to secure organisations' digital transformation and leads with vision and courage. Her excellence has been recognised through multiple awards, including the Women In Security Awards in 2024 and the WiTWA Tech [+] Outstanding Senior Leadership Award for excellence in leadership, communication, and strategic vision. In 2024, she also received the CSO30 award in the Leadership category for fostering a security-conscious culture.
Vannessa is a conference speaker on Cyber Resilience & AI and serves as an adjunct Senior Lecturer at Murdoch University.
Dan Haagman
Dedicated to strategic cyber security thinking and research, Dan Haagman is the CEO and founder of Chaleit, a seasoned leader in global cyber security consulting, and an Honorary Professor of Practice at Murdoch University in Perth, Australia.
With nearly 30 years of experience, he began his journey at The London Stock Exchange, where he pioneered the development of their first modern SOC and defensive team. As a co-founder of NotSoSecure and 7Safe, both acquired by reputable firms, Dan has left a lasting impact on the industry.
Today, Dan leads a team of brilliant minds in seven countries, all focused on delivering world-class cyber security consulting. Chaleit reflects Dan's vision for the industry's future. Built on the core principles of longevity and transparency, the company is poised for a public offering within the next few years.
Dan has a passion for learning. With a pen and paper at hand, he dedicates significant time to reading, researching, designing systems, and learning with clients and peers with the goal of being a leading thinker and collaborator in the cyber industry.
Disclaimer
The views expressed in this article represent the personal insights and opinions of Dan Haagman and Vannessa van Beek. Dan Haagman's views also reflect the official stance of Chaleit, while Vannessa van Beek's views are her own and do not necessarily represent the official position of her organisation. Both authors share their perspectives to foster learning and promote open dialogue.