TL;DR
AI is already embedded across enterprise environments, often without a deliberate decision to adopt it. And while boards are asking for speed and innovation, security teams are left managing the consequences of systems they no longer fully control.
In this collaborative essay, Jonar Marzan (Group Cyber – Security, Risk & Compliance Manager, Coles Group) and Dan Haagman (CEO, Chaleit) explore how AI has fundamentally disrupted responsibility, governance, and accountability in cyber security.
From vendor-driven adoption and blurred ownership to broken data foundations and rising operational noise, they unpack why existing security models no longer fit and what needs to change if organisations want AI to be useful, not dangerous.
Context: AI arrived before organisations were ready
AI did not enter organisations through a single decision or a clearly funded program. It arrived bundled into cloud platforms, productivity tools, and vendor updates. One week, it was optional. The next, it was simply there.
Dan explains that if you are using mainstream enterprise platforms today, you are already using AI. There is no separate approval step. No clean line between before and after. That reality matters because cyber security still operates on assumptions built for a very different model of technology adoption.
For Jonar, this disconnect shows up every day in enterprise settings. AI discussions often begin once the board is already concerned, not before. Governance groups form reactively, controls are discussed after pilots are already live, and the result is not deliberate adoption, but constant catch-up.
What makes this moment harder than previous waves of technology is not just speed. It is where control now sits.
Data is no longer only stored by third parties. It is processed, analysed, and reused by them. AI tools do not simply support business activity; they participate in it.
Security teams remain accountable when things go wrong. Yet their authority over the systems that matter most has narrowed. And risk accumulates.
This sets the stage for a set of structural rather than technical challenges.
Challenges
AI has added new risks but also exposed weaknesses in how security, governance, and accountability are structured. These challenges show up in daily decisions, growing workloads, and increasing pressure on teams who are still expected to keep everything safe.
Challenge #1. AI adoption without strategy
Across large organisations, AI activity is often fragmented. Teams experiment separately. Pilots multiply and proofs of concept pile up. What rarely exists is a unifying strategy that explains why AI is being used, where it fits, and what risks are acceptable.
Jonar has seen this pattern.
Many organisations can point to dozens of AI initiatives, yet struggle to answer a basic question: where is this going?
Without clarity, prioritisation becomes impossible. Risk decisions are made in isolation. Cyber security, privacy, and compliance teams are left reacting to choices they did not shape.
Dan observes that this absence of strategy is not accidental. The pace of vendor-led deployment has created a world where organisations feel pushed to respond rather than plan.
AI becomes another urgent item competing for attention, not a capability grounded in long-term intent.
The data support the experts’ observations. Enterprise surveys point to high rates of AI pilots lacking strategic alignment, and MIT research found that 95% of generative AI pilots fail to deliver measurable business results, citing poor integration and the absence of clear business outcomes.
Gartner forecasts that over 40% of agentic AI projects will be cancelled by 2027 due to escalating costs, unclear business value, and inadequate risk controls, including ownership gaps.
This lack of strategy feeds directly into the next issue: loss of control.
Challenge #2. Loss of control in a vendor-driven ecosystem
Cloud adoption already required organisations to give up a degree of direct control. AI extends that dependency further.
Jonar recalls discovering AI features appearing inside enterprise tools without prior approval. A browser plugin here, a collaboration feature there. Each change is small on its own, yet together they create a pattern in which security teams are no longer deciding what enters their systems and are responding after the fact.
He describes it as a whack-a-mole problem. Controls can be applied at network boundaries, but features embedded inside vendor platforms change faster than those controls can adapt.
Dan connects this to a deeper issue. Organisations once built and ran their own systems, and they understood what they owned. With cloud and AI, they have placed critical parts of their operations into ecosystems that operate on different incentives and timelines.
From an attacker’s point of view, the boundary no longer matters. Weaknesses anywhere in the chain become relevant. Yet responsibility for failure still lands with the organisation whose name appears in the headline.
Challenge #3. A responsibility model that no longer fits
AI blurs the lines of responsibility.
Jonar asks an important question: where does responsibility sit when vendors both host and process data?
In AI contexts, providers are not passive custodians. They actively train and refine systems using customer data flows.
Dan takes this further. Even when contracts include safeguards and addenda, accountability does not shift in the public eye. If customer data is exposed, the organisation remains answerable. Legal remedies do little to repair trust or recover lost opportunity.
This creates a mismatch. Organisations carry accountability without holding equivalent control. And security teams inherit risks they cannot fully manage.
As responsibility blurs, foundational weaknesses become harder to ignore.
Challenge #4. Weak foundations exposed by AI
Boards often ask what the organisation is doing about AI. Jonar’s response is consistent: before asking about AI, ask about data.
Many organisations still lack clear data ownership, consistent classification, or reliable controls over sensitive information. AI does not correct those gaps but magnifies them.
Jonar frames AI as another layer in the stack. Without strong data governance, security, and privacy practices, that layer rests on unstable ground.
Dan warns that organisations rush to add advanced capabilities while basic construction remains unfinished: attention goes to what is visible, not to what holds everything up.
These challenges create pressure, noise, and fatigue inside security teams, which raises the question of what can realistically be done.
Solutions
Below are practical steps organisations can take to reduce risk and regain clarity. The solutions focus on discipline, foundations, and using AI in ways that support people rather than overwhelm them.
Solution #1. Anchor AI in strategy before scaling activity
Both experts return repeatedly to strategy as a discipline.
AI activity should flow from business intent.
That means agreeing upfront where AI adds value, where it does not, and what trade-offs are acceptable. Cyber security, privacy, and risk teams must be involved early, not asked to approve decisions already made.
Jonar stresses that strategy enables prioritisation. Without it, security teams are forced to treat every AI use case as urgent, even when risk differs widely.
How to gain strategic clarity and reduce fragmentation:
Define an organisation-wide AI strategy aligned to business goals
Link AI initiatives explicitly to cyber security and data governance objectives
Set clear thresholds for acceptable risk before pilots begin, as in the sketch below
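To make that last item concrete, risk thresholds can be expressed as policy-as-code, so every pilot is checked against the same criteria before launch. The following is a minimal sketch in Python; the fields, classification labels, and thresholds are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class PilotProposal:
    name: str
    data_classification: str   # e.g. "public", "internal", "confidential"
    vendor_retains_data: bool  # does the provider reuse data for training?
    human_in_the_loop: bool    # is a human accountable for final decisions?

# Example thresholds agreed upfront with security, privacy, and risk teams.
CLASSIFICATION_ORDER = ["public", "internal", "confidential"]
MAX_AI_CLASSIFICATION = "internal"  # nothing above this may enter a pilot

def approve_pilot(p: PilotProposal) -> tuple[bool, list[str]]:
    """Return (approved, reasons_for_rejection) for a proposed pilot."""
    reasons = []
    if (CLASSIFICATION_ORDER.index(p.data_classification)
            > CLASSIFICATION_ORDER.index(MAX_AI_CLASSIFICATION)):
        reasons.append("data classification exceeds agreed threshold")
    if p.vendor_retains_data:
        reasons.append("vendor reuses customer data for training")
    if not p.human_in_the_loop:
        reasons.append("no human accountable for final decisions")
    return (not reasons, reasons)

approved, reasons = approve_pilot(PilotProposal(
    name="contract-summariser",
    data_classification="confidential",
    vendor_retains_data=True,
    human_in_the_loop=True,
))
print(approved, reasons)  # False, with the two failing checks listed
```

The value sits less in the code than in the sequencing: the gate forces the risk conversation to happen before a pilot goes live, not after.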
Solution #2. Design security for ecosystems, not internal systems
Control no longer stops at organisational borders, and security models must reflect that.
Dan argues that organisations need to accept partial loss of control and design accordingly. That includes recognising vendors as active participants in data processing, not neutral platforms.
At the same time, engagement needs to be ongoing rather than built on static assessments. As Jonar highlights, annual reviews and questionnaires struggle to reflect continuous change.
How to set more realistic expectations around responsibility:
Treat AI vendors as ongoing risk partners, not one-time suppliers
Update third-party risk processes to account for continuous feature changes, as sketched below
Focus on transparency and communication, not just contractual safeguards
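As one illustration of continuous engagement, the sketch below polls a vendor changelog and flags AI-related entries for human review. The URL, keyword list, and plain-text changelog format are hypothetical placeholders rather than any real vendor interface.

```python
import re
import urllib.request

CHANGELOG_URL = "https://vendor.example.com/changelog.txt"  # hypothetical endpoint
# Word-boundary patterns so "ai" does not match words like "maintain".
AI_PATTERN = re.compile(r"\b(ai|copilot|assistant|llm|machine learning)\b", re.I)

def flag_ai_changes(url: str = CHANGELOG_URL) -> list[str]:
    """Return changelog lines that mention AI features, for human triage."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return [line for line in text.splitlines() if AI_PATTERN.search(line)]

# Scheduled per vendor (e.g. daily), flagged entries feed the existing
# third-party risk review queue rather than waiting for an annual survey.
for entry in flag_ai_changes():
    print("REVIEW:", entry)
```

Run per vendor on a schedule, this turns third-party review into a continuous process instead of an annual questionnaire.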
Solution #3. Rebalance accountability with integrity
Contracts alone cannot resolve the accountability gap. What matters is how organisations and vendors behave when risk materialises.
Dan describes holding his own company to standards higher than those demanded by customers or regulators.
Continuous internal testing, rotating scope, and written accountability create discipline that paperwork alone cannot.
Jonar agrees that integrity on the supply side matters more as AI increases leverage. Shortcuts compound risk quickly.
Integrity does not eliminate risk, but it changes how risk is managed. Here’s what to do:
Hold internal teams to standards above minimum compliance
Prioritise vendors who demonstrate transparency and ethical practice
Recognise that reputation risk outweighs contractual protection
Solution #4. Use AI to reduce cognitive load, not replace judgment
AI has clear value inside cyber security when used correctly.
Dan offers an analogy that captures the balance well. In aviation, automation reduces workload so humans can focus on oversight, context, and decision-making. It does not remove responsibility.
This connects directly to cyber operations. In areas like SOC work and third-party assessment, AI can summarise, correlate, and surface patterns. Final judgment remains human.
How to restore balance between speed and certainty:
Deploy AI to manage volume, not make final decisions
Keep humans accountable for outcomes
Avoid systems where AI evaluates AI without oversight
The sketch below shows one way to hard-wire that division of labour.
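It is a minimal sketch, with summarise() standing in for whatever model or service an organisation actually uses: the AI may summarise and rank alerts, but only a named analyst can record a verdict.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    raw_events: list[str]
    ai_summary: str = ""
    ai_priority: float = 0.0           # model-suggested ranking only
    human_verdict: str | None = None   # the only field that closes an alert

def summarise(events: list[str]) -> tuple[str, float]:
    """Placeholder for an AI summarisation/scoring call (an assumption,
    not a real service); returns (summary, suggested priority 0..1)."""
    return f"{len(events)} correlated events", min(1.0, len(events) / 10)

def triage(alerts: list[Alert]) -> list[Alert]:
    """The AI reduces volume and orders the queue; it never decides."""
    for a in alerts:
        a.ai_summary, a.ai_priority = summarise(a.raw_events)
    return sorted(alerts, key=lambda a: a.ai_priority, reverse=True)

def close_alert(alert: Alert, analyst: str, verdict: str) -> None:
    """Closure requires a named human, so outcomes stay human-accountable."""
    alert.human_verdict = f"{verdict} (by {analyst})"

queue = triage([Alert("a1", ["login", "exfil", "dns"]), Alert("a2", ["scan"])])
close_alert(queue[0], analyst="j.doe", verdict="true positive")
print(queue[0].ai_summary, "->", queue[0].human_verdict)
```

The design choice matters: because a verdict can only be written through a human-attributed call, accountability for outcomes stays exactly where Dan and Jonar argue it belongs.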
Solution #5. Fix the foundations before expanding AI use
Every solution returns to the same base: data.
Jonar is clear about this:
Data governance, data security, and data privacy must be in place before AI governance can work. Without them, organisations respond to board pressure without the controls needed to manage risk.
Dan reinforces this by pointing out that visible progress often distracts from structural weakness.
To avoid this, be careful to:
Establish clear data ownership and classification (see the sketch after this list)
Strengthen controls over sensitive information
Align privacy practices with actual data use, not policy statements
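These checks can be made testable. The sketch below, using hypothetical dataset names and labels, treats a dataset as AI-ready only once it has a named owner, a classification, and a classification that falls within agreed policy.

```python
# Hypothetical registry: every dataset needs a named owner and a
# classification before any AI tool may touch it.
DATA_REGISTRY = {
    "customer_orders":  {"owner": "retail-ops", "classification": "confidential"},
    "store_locations":  {"owner": "property",   "classification": "public"},
    "loyalty_profiles": {"owner": None,         "classification": None},  # a gap
}

AI_ALLOWED_CLASSIFICATIONS = {"public", "internal"}  # example policy line

def ai_ready(dataset: str) -> tuple[bool, str]:
    """A dataset is AI-ready only with an owner, a classification,
    and a classification inside the agreed policy."""
    entry = DATA_REGISTRY.get(dataset)
    if entry is None:
        return False, "not in registry"
    if entry["owner"] is None or entry["classification"] is None:
        return False, "missing owner or classification"
    if entry["classification"] not in AI_ALLOWED_CLASSIFICATIONS:
        return False, "classification not permitted for AI use"
    return True, "ok"

for name in DATA_REGISTRY:
    print(name, ai_ready(name))
```

A registry like this will not fix governance on its own, but it turns fixing the foundations from a slogan into a pass/fail condition for AI initiatives.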
Strong foundations do not slow AI adoption; they are what make it safer.
Key takeaways
The experts offer a useful lens for leaders who want to make sense of AI’s impact on cyber security without oversimplifying the work involved. Here are a few insights to remember:
AI is already embedded in enterprise systems, often without a clear decision to adopt it.
Pilots without strategy create risk that spreads faster than governance can respond.
Cloud and AI have reduced direct organisational control, but accountability remains with the organisation.
Weak data governance is exposed quickly once AI is introduced.
AI works best when it reduces cognitive load and supports human judgment.
Integrity and discipline on the supply side matter more than contractual safeguards.
Strong foundations in data security, privacy, and ownership are non-negotiable.
Safe AI adoption relies on a clear strategy, strong foundations, and honest engagement with suppliers. Organisations that get this right will be able to stay in control without compromising progress.
Chaleit works with organisations to bring clarity to complex security problems, focusing on what actually reduces risk, not what simply looks reassuring. If you’re facing any of the challenges highlighted in this article, we can help.
About the authors
Jonar Marzan
Jonar is a cyber security leader who makes security a business enabler, not an afterthought. He helps senior leadership understand cyber risk, implement strategy, and build resilient, value-driven cyber practices.
With 15 years’ experience across financial services, fintech, government, technology, telecoms, automotive, BPO and retail, he specialises in cyber strategy, governance and risk, enterprise security architecture, cloud, identity and data protection – enabling organisations to reduce risk, strengthen resilience, and drive digital transformation.
He’s worked with EY, IBM and a cloud security startup acquired by Bitdefender. Currently, he’s a Group Cyber – Security, Risk & Compliance Manager at Coles Group focused on cyber governance, risk and compliance initiatives.
Dan Haagman
Dedicated to strategic cyber security thinking and research, Dan Haagman is a global leader in the cyber domain — a CEO, client-facing CISO, Honorary Professor of Practice, and trusted advisor to some of the world’s most complex organisations.
Dan’s career began nearly 30 years ago at the London Stock Exchange, where he was part of the team that developed its first modern Security Operations Centre (SOC). He went on to co-found NotSoSecure and 7Safe, both acquired after helping shape the industry’s penetration testing and training practices.
His deepest commitment is to what followed: Chaleit — a company that has become Dan’s life’s work and passion. Founded not just to participate in the industry, but to elevate it, Chaleit brings together deep offensive testing capabilities and mature consulting, helping clients move from diagnosis to resolution.
Today, he leads a globally distributed team across seven countries, steering Chaleit with principles of longevity and transparency, and guiding it toward a future public offering.
Dan is also the founder of the CISO Global Study — an open-source initiative created for the benefit of the broader industry. Through it, he works alongside hundreds of CISOs globally, distilling insight, exchanging learning, and challenging the assumptions that shape the field.
Disclaimer
The views expressed in this article represent the personal insights and opinions of Dan Haagman and Jonar Marzan. Dan Haagman's views also reflect the official stance of Chaleit, while Jonar Marzan's views are his own and do not necessarily represent the official position of his organisation. Both authors share their perspectives to foster learning and promote open dialogue.