
15 Dec 2025

Strategy

15 min reading time

AI: The New Superuser in Your Organisation


TL;DR

Most commentary on AI risk focuses on model behaviour, including hallucinations, prompt injection, and unexpected outputs. But according to Kane Narraway (Head of Enterprise Security at Canva) and Dan Haagman (CEO of Chaleit and founder of the CISO Global Study), these issues are only a fraction of what security teams actually need to contend with.

The real pressure is coming from somewhere far more familiar: service accounts, uncontrolled integrations, and fragmented workflows stitched together by AI systems that operate faster and more broadly than anything we’ve seen before.

Kane has been analysing how AI tools connect to internal systems, while Dan has been studying how CISOs worldwide are responding to AI in their organisations. When they compare notes, there’s a pattern: the risks are immediate and already embedded in everyday work. 

Security teams are dealing with old problems, but at a scale and speed that pushes them into new territory. This essay lays out the challenges both experts see emerging and the solutions available now to address them.

Context: AI is already inside your organisation

Ask most organisations whether they have “adopted AI”, and many will say not yet, or “we’re experimenting”. That is no longer an accurate picture.

“People are going to be using it anyway. And if you’re not supplying something safe, they might host their own,” Kane notes.  

Employees use AI because it helps them move faster. Engineers rely on coding assistants. Customer-facing teams use AI to draft communications. Leaders use it to summarise documents. Even those resisting AI indirectly use it through the tools they already rely on.

“We’ll log in one morning, and there it is already done for us. I’ve seen this in Slack and other tools — no one asked for it,” Dan observes.  

From Jira to Google Docs to productivity suites, AI is already wired in. These tools now act as brokers, passing information across systems, sometimes with permissions the security team never reviewed.

You may not have adopted AI — but your tools have.

And because AI depends on deep integration, it touches an enormous amount of organisational data. Customer records, support logs, internal documents, development pipelines, storage buckets. The connective tissue is widening while oversight remains narrow.

Security teams face a familiar set of issues — identity sprawl, shadow systems, privilege overshoot, fragile supply chains — but AI amplifies them. Not because the technology is out of control, but because the way people deploy it often is.

With that context, we examine the core challenges that Kane and Dan see inside today’s organisations.

Challenges

AI doesn’t create risk in isolation. The pressure comes from how it is introduced, connected, and given access inside organisations. The challenges below reflect what security teams often encounter before they realise it.

Challenge #1. AI is amplifying service-account sprawl and privilege exposure

AI systems need access to large amounts of data. They read, summarise, search, monitor, and interact with multiple tools on behalf of employees. That means they rely on long-lived service accounts, often with permissions broader than any single human user would ever receive.

“LLMs need long-term access to wide swathes of data stores. It’s hugely risky. I’m already seeing companies with service accounts growing faster than the number of people,” Kane explains.  

Dan adds: “We’re talking about unchecked privilege through service accounts.”

Here’s why this is a concern:

  • Service accounts rarely follow the same access hygiene as human accounts.

  • Their permissions are not reviewed as often.

  • They persist through organisational changes.

  • Multiple integrations inherit the same token, multiplying exposure.

  • Users commonly retain access to these service accounts, creating an easy path around human-level controls and approvals.

AI tools extend this problem. A single token connected to an AI assistant may grant access to customer support logs, Git repositories, file storage, ticketing systems, and internal APIs.

That is a staggering concentration of access.
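
To make that concentration concrete, here is a toy sketch in Python: given one compromised AI-assistant token, it lists everything an attacker would inherit through it. The token name, systems, and data classes are hypothetical examples, not a real inventory.

```python
# Toy illustration of single-token concentration: everything reachable if one
# AI-assistant token is compromised. All names are hypothetical examples.
TOKEN_GRANTS = {
    "ai-assistant-prod-token": {
        "support-logs": ["customer PII", "ticket history"],
        "git-repos": ["source code", "secrets in commit history"],
        "file-storage": ["contracts", "internal documents"],
        "ticketing": ["incident details"],
        "internal-api": ["user records"],
    }
}

def blast_radius(token: str) -> list[str]:
    """List every system and data class an attacker inherits with this token."""
    exposed = []
    for system, data_classes in TOKEN_GRANTS.get(token, {}).items():
        exposed.extend(f"{system}: {d}" for d in data_classes)
    return exposed

if __name__ == "__main__":
    for item in blast_radius("ai-assistant-prod-token"):
        print(item)
```

Five systems, one credential. The inventory is made up, but the shape of the exposure is exactly what a shared AI-assistant token creates.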

And this problem expands quickly once people begin using AI tools without waiting for the organisation to formally support them.

Challenge #2. AI is driving the fastest-growing form of shadow IT

Shadow IT has existed for decades. What’s new is how deeply AI integrates with organisational systems.

In previous eras, an unauthorised tool might have stored files or conversations. With AI tools, employees grant permissions that allow automation, decision-making, and cross-system actions.

Kane sees this regularly: “If you’re not supplying a safe option, they might host their own.”

The data confirms this alarming trend.

A 2025 WalkMe survey of U.S. professionals found that 78% of respondents used unapproved AI tools, 51% had received conflicting guidance on usage, and only 7.5% had received extensive training. Also in 2025, Microsoft found that 71% of over 2,000 UK employees surveyed used shadow AI, often for email, reporting, and finance tasks.

Another 2024 survey of 1,500 security leaders and employees across the U.S., U.K., Canada, Australia, New Zealand, Singapore, and India, showed that over 80% used unapproved AI tools, with nearly 90% of security professionals doing so. Half reported regular use, and executives had the highest adoption rates.

Shadow AI includes:

  • Self-hosted LLMs on personal machines.

  • Browser extensions with vague permission models.

  • AI plug-ins that request access to email, calendars, or cloud storage.

  • Vendor-enabled AI capabilities that appear without warning.

The main risk is the creation of unmonitored data flows between systems, where AI tools act as bridges. A developer grants access “just to test something”. A team uses an unofficial integration to summarise tickets. A manager experiments with an AI plug-in for meeting notes. Each instance creates a new node in a chain no one fully understands.

Once shadow systems are in place and deeply connected, the next problem becomes inevitable: people start allowing AI tools to perform actions autonomously.

Further reading: The shadow AI problem was also discussed in a collaborative essay with cyber security leader Vannessa van Beek on AI Chaos Engineering.

Challenge #3. Autonomous AI actions are bypassing safety checkpoints

The attraction of AI is speed. Tools that generate code or perform actions automatically can remove hours of manual work. But speed without review introduces a new category of operational risk.

Kane gives a clear example: “YOLO mode… go out, do all the things, no human check. It can be powerful, but it could also delete your database if you connected it to production.”

Dan connects this to aviation, where automation has existed for decades: “In aviation, automation has been around for ages, but the gates and checkpoints are still human. With some AI tools, those checkpoints are disappearing.”

This raises several risks:

  • AI interprets instructions literally rather than contextually.

  • Hallucinated commands can be executed without pause.

  • Tools may chain actions together in ways engineers didn’t expect.

  • Engineers may assume an AI-generated sequence is correct without reviewing it.

The result is increasing autonomy without established safety boundaries.

Dan explains the human dimension in this scenario:

“People get tired. They cut corners. They take more risks when career opportunities are in flux. AI doesn’t understand these dynamics — it simply executes.”

And even if you control internal autonomy, the external dependencies powering AI create a different issue entirely: supply-chain fragility.

Challenge #4. AI’s supply chain is fragile, opaque, and easy to compromise

AI systems rely on open-source packages, intermediary servers, and small components built by individual developers. These components inherit all permissions granted to the AI tool.

“Someone can offer an open-source developer a bit of money for a package used everywhere, and they might just hand it over. We’ve seen that happen,” Kane says.  

Dan points to the fate of highly sensitive data when companies collapse: “Think of the most sensitive information held by businesses that later went into administration. Your credit card doesn’t matter compared to your DNA.”

The point is simple: AI depends on components that are neither stable nor transparent.

A Model Context Protocol (MCP) server created by a single developer can become the backbone of an organisation’s AI workflow. A small library can sit between a model and an internal data store. A startup with critical permissions can be acquired by an adversarial entity.

Kane summarises the operational risk:

“AI tools need access to customer support logs, uploads, private files… usually through one service account. If that’s compromised, the whole lot is accessible.”

While the challenges are serious, both experts also have solutions. As they emphasise, the most effective defences are often rooted in fundamentals that teams already understand.

Solutions

The problems outlined above are not signs that AI is uncontrollable or inherently unsafe. They are indicators that organisations need to establish patterns early. AI amplifies risk, but it also amplifies the value of clear thinking and disciplined engineering.

Here’s how organisations can begin closing the gap.

Solution #1. Treat non-human identities as a primary security domain

Service accounts must be managed with the same discipline as human identities.

 “If you’re doing the basics well — scoping tokens properly, monitoring access, limiting privileges — you end up in a much better spot,” Kane explains.  

What organisations can do now:

  • Maintain a full inventory of all AI-related tokens.

  • Limit each service account to the smallest set of permissions possible.

  • Enforce automated credential rotation.

  • Monitor behaviour for unusual access patterns.

  • Assign individual service accounts per integration rather than shared ones.

This work is methodical and necessary.
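
As a rough illustration of what that basic hygiene can look like, here is a minimal Python sketch that inventories AI-related service-account tokens and flags the patterns listed above: overly broad scopes, stale credentials, and tokens shared across integrations. The data model, scope names, and 90-day rotation window are assumptions; in practice the token metadata would come from your identity provider or secrets manager.

```python
# Minimal sketch: flag AI-related service-account tokens that break basic hygiene.
# The inventory below is hand-written for illustration; a real version would pull
# token metadata from an identity provider or secrets manager.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

MAX_TOKEN_AGE = timedelta(days=90)          # assumed rotation policy
BROAD_SCOPES = {"admin", "*", "read:all"}   # assumed "too broad" scopes

@dataclass
class ServiceAccountToken:
    name: str                 # e.g. "ai-assistant-jira"
    integration: str          # the single integration this token should serve
    scopes: set[str]
    last_rotated: datetime
    shared_with: list[str] = field(default_factory=list)  # other integrations reusing it

def flag_risky_tokens(tokens: list[ServiceAccountToken]) -> list[str]:
    """Return human-readable findings for tokens that violate the rules above."""
    findings = []
    now = datetime.now(timezone.utc)
    for t in tokens:
        broad = t.scopes & BROAD_SCOPES
        if broad:
            findings.append(f"{t.name}: overly broad scopes {sorted(broad)}")
        if now - t.last_rotated > MAX_TOKEN_AGE:
            findings.append(f"{t.name}: not rotated in {(now - t.last_rotated).days} days")
        if t.shared_with:
            findings.append(f"{t.name}: shared across integrations {t.shared_with}")
    return findings

if __name__ == "__main__":
    inventory = [
        ServiceAccountToken(
            name="ai-assistant-jira",
            integration="jira",
            scopes={"read:all"},
            last_rotated=datetime(2025, 1, 1, tzinfo=timezone.utc),
            shared_with=["confluence", "slack"],
        ),
    ]
    for finding in flag_risky_tokens(inventory):
        print(finding)
```

Run against a real inventory, a check like this becomes the backbone of the review cadence Kane describes: scoped tokens, regular rotation, one account per integration.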

Once identities are controlled, the next step is preventing users from bypassing official controls altogether.

Solution #2. Provide secure, approved AI pathways early

Shadow AI emerges when employees lack alternatives. Dan stresses the need for ownership: “Own it. Think about it. The responsibility still remains yours.”

Kane explains that timing matters: “Once people get entrenched in workflows they like, it’s far harder to consolidate them later.”

Actions to take:

  • Publish clear guidance on approved tools.

  • Build internal “paved paths” for safe AI usage.

  • Offer pre-vetted integrations for coding, documentation, or data querying.

  • Create a review process for new tools and use cases.

With sanctioned AI tools in place, organisations can focus on maintaining the right boundaries for autonomy.

Solution #3. Reinstate human checkpoints where they matter most

Kane is firm on this:

“AI tools shouldn’t be able to make changes in production. Maybe read. But keep the guardrails firm.”

Actions organisations can take:

  • Require approval for any high-impact change or action.

  • Ban autonomous operations in production environments.

  • Build isolated sandboxes where engineers can safely experiment.

  • Use automated diff reviews to display AI-generated changes before execution.

Dan’s earlier aviation analogy underscores the principle: automation is powerful, but humans remain the final checkpoint for safety.
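
A minimal sketch of such a checkpoint, assuming a simple interactive approval rather than a full change-management workflow: the Python below shows a unified diff of an AI-proposed change and applies it only if a human explicitly says yes. The apply_change function and the example file are placeholders for whatever mechanism your tooling actually uses.

```python
# Minimal sketch: diff review plus human approval before an AI-proposed change
# is applied. The approval step is a plain interactive prompt for illustration.
import difflib

def review_and_apply(path: str, current: str, proposed: str) -> bool:
    """Show a unified diff of an AI-proposed change and apply it only on approval."""
    diff = difflib.unified_diff(
        current.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile=f"{path} (current)",
        tofile=f"{path} (AI-proposed)",
    )
    print("".join(diff) or "No changes proposed.")

    answer = input("Apply this change? [y/N] ").strip().lower()
    if answer != "y":
        print("Change rejected; nothing applied.")
        return False

    apply_change(path, proposed)
    return True

def apply_change(path: str, content: str) -> None:
    # Placeholder: in production this would go through the normal deployment
    # pipeline, never a direct write from the AI tool's own credentials.
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(content)

if __name__ == "__main__":
    review_and_apply(
        "config.yaml",
        current="replicas: 2\n",
        proposed="replicas: 2\ntimeout_seconds: 30\n",
    )
```

The mechanism is trivial; the placement is the point. The human gate sits between generation and execution, exactly where "YOLO mode" removes it.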

Well-reviewed internal operations won’t protect organisations if their external dependencies are unstable.

Solution #4. Strengthen supply-chain controls for AI components

AI supply chains require the same scrutiny as traditional software supply chains — arguably more.

 “You can’t rely on platform vendors to secure everything. Their focus is building better models, not securing the ecosystem,” Dan warns.

Actions organisations can take:

  • Maintain a catalogue of AI-related components and libraries.

  • Monitor for ownership changes in open-source packages.

  • Apply dependency scanning and signature verification.

  • Perform vendor-risk reviews on AI startups and platforms.

  • Require multi-person approval for introducing new AI dependencies.
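
As one concrete example of that scrutiny, the Python sketch below verifies downloaded AI-related artefacts against a manifest of pinned SHA-256 hashes and flags anything unpinned or altered. The manifest format, directory, and file names are hypothetical; in practice the same idea is available through established tooling such as pip's --require-hashes mode or a dependency-scanning service.

```python
# Minimal sketch: verify AI-related artefacts against pinned SHA-256 hashes.
# The manifest and paths are illustrative placeholders.
import hashlib
from pathlib import Path

# Assumed manifest: artefact filename -> expected SHA-256 digest.
PINNED_HASHES = {
    "mcp_connector-1.4.2.tar.gz": "<expected sha256 digest>",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artefacts(download_dir: Path) -> list[str]:
    """Return findings for artefacts that are unpinned or fail hash verification."""
    failures = []
    if not download_dir.exists():
        return [f"{download_dir}: directory not found"]
    for artefact in sorted(download_dir.iterdir()):
        expected = PINNED_HASHES.get(artefact.name)
        if expected is None:
            failures.append(f"{artefact.name}: not in the pinned manifest")
        elif sha256_of(artefact) != expected:
            failures.append(f"{artefact.name}: hash mismatch, possible tampering")
    return failures

if __name__ == "__main__":
    for problem in verify_artefacts(Path("vendor/ai-components")):
        print(problem)
```

Hash pinning will not catch a maintainer handing over a package before you pin it, but it does ensure that what ran yesterday is what runs today, and that any change to a dependency forces a conscious re-review.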

AI will continue to grow inside companies mainly because it helps people work faster. That momentum won’t slow down.

“Don’t let it run away from you,” is Dan’s advice. The role of security teams is to channel AI, not chase it.

Kane echoes the sentiment in practical terms: “Get ahead of it. If you wait, the workflows will solidify before you can secure them.”

Those who act early will set themselves up for safer, more effective adoption. Those who delay will find themselves unpicking a web of integrations they never knew existed.

But Kane also offers an important warning:

“If you don’t have strong MFA and you're dealing with password theft every day, don’t worry about AI yet.”

Get the basics right before adding more technology. 

Read more about The Spectrum of Cyber Security Prioritisation.  

Key takeaways

These points summarise what matters most in dealing with AI security and risk:

  • AI is already inside your organisation. Even if you haven’t formally adopted it, employees and vendors bring it in for you.

  • Service accounts are now among the most exposed identity types, and AI is pushing their numbers and privileges far beyond what most teams can track.

  • Shadow AI grows fastest when security doesn’t offer safe alternatives, leading to unmonitored workflows and unclear data flows.

  • Autonomy without review creates operational risk, especially when AI tools can act across systems with no human checkpoints.

  • The AI supply chain is fragile, often relying on small open-source components or early-stage vendors that inherit significant access.

  • Early design wins. Secure “paved paths,” disciplined identity management, and clear boundaries for autonomy are more effective than reactive controls.

If you need support building clarity around AI use cases and designing safe patterns your teams will actually use, Chaleit can help. We work with security leaders who want practical, grounded guidance that solves real problems.

Get in touch and let’s make AI safer, simpler, and genuinely useful for your organisation.

About the authors

Kane Narraway

Kane is a technical engineering manager with an unwavering passion for all things IT security. With over a decade of experience in building (and breaking) corporate networks, Kane dabbled in the realms of IT, red teaming, and DFIR before going on to lead the enterprise security teams at companies like Atlassian, Shopify, and now Canva.

Dan Haagman

Dedicated to strategic cyber security thinking and research, Dan Haagman is a global leader in the cyber domain — a CEO, client-facing CISO, Honorary Professor of Practice, and trusted advisor to some of the world’s most complex organisations.

Dan’s career began nearly 30 years ago at the London Stock Exchange, where he was part of the team that developed its first modern Security Operations Centre (SOC). He went on to co-found NotSoSecure and 7Safe, both acquired after helping shape the industry’s penetration testing and training practices.

His deepest commitment is to what followed: Chaleit — a company that has become Dan’s life’s work and passion. Founded not just to participate in the industry, but to elevate it, Chaleit brings together deep offensive testing capabilities and mature consulting, helping clients move from diagnosis to resolution. 

Today, he leads a globally distributed team across seven countries, steering Chaleit with principles of longevity and transparency, and guiding it toward a future public offering.

Dan is also the founder of the CISO Global Study — an open-source initiative created for the benefit of the broader industry. Through it, he works alongside hundreds of CISOs globally, distilling insight, exchanging learning, and challenging the assumptions that shape the field. 

Disclaimer

The views expressed in this article represent the personal insights and opinions of Dan Haagman and Kane Narraway. Dan Haagman's views also reflect the official stance of Chaleit, while Kane Narraway's views are his own and do not necessarily represent the official position of his organisation. Both authors share their perspectives to foster learning and promote open dialogue.

