
30 Mar 2026

Strategy

12 min reading time

The Six Layers of Responsible AI Adoption

Samrat Seal

TL;DR

Artificial intelligence is being adopted at a rapid pace. Boards want it, business units are experimenting with it, and vendors are packaging it attractively. But speed without structure creates fragility.

In a valuable collaboration, Samrat Seal (Head of Transformation and Governance - Cyber & AI Platforms at Kmart Australia Limited) and Dan Haagman (CEO of Chaleit, Founder of the CISO Global Study) argue that most organisations are treating AI as a shortcut rather than an engineered capability. This results in vague strategies, weak data controls, hidden consumption costs, and security risks that surface too late.

Their shared view is that AI must be designed as enterprise infrastructure. That means clear commercial intent, disciplined data governance, cost visibility, and embedded security thinking from day zero. Otherwise, the same pattern seen in early cloud adoption will repeat — only faster.

Context: AI enthusiasm outpacing readiness

AI is now part of executive conversation. In many organisations, business teams are pushing use cases forward with urgency. The barrier to entry feels low: a licence can be provisioned in minutes, and a prototype built in days.

Dan has seen the pattern repeatedly across sectors: security and transformation leaders are pulled in at the last moment.

Samrat echoes this with a scenario in which a business unit approaches security two weeks before production, requesting adversarial testing and a risk assessment for an already-built AI solution. In his view, that situation signals a structural failure. If security and governance are only engaged at the end, the process itself has been designed incorrectly.

Enthusiasm for AI is understandable. What creates risk is the order in which decisions are made.

When governance, architecture, and risk assessment come last, they become obstacles rather than design inputs. That sequencing sets up tension, shortcuts, and, eventually, consequences.

AI is often treated as an application layer. In reality, it sits across data, identity, infrastructure, finance, and governance. Without design discipline, organisations create what Dan describes as “the law of unanticipated consequences” — systems that appear to work until cost, security, or operational strain reveals hidden weakness.

Industry data reinforces this concern. MIT's State of AI in Business 2025 report states that as many as 95% of enterprise AI pilots fail to reach production, delivering zero ROI.

To understand why, we need to look at the structural challenges beneath the excitement.

Challenges

The risks around AI adoption come from how organisations design, sequence, and fund their initiatives. In this discussion, four weaknesses stand out: unclear intent, fragile data foundations, hidden consumption costs, and security treated as a late-stage review. Each one compounds the others.

Challenge #1. Jumping to solutions

One of the first issues both experts highlight is the tendency to start with tools rather than intent.

“If you ask one leader, do you want to do AI? Yes,” Samrat explains. “What do you want to get out of it? What are your top two or top five use cases? They stumble.”

The instinct is to default to familiar patterns: chatbots, AI-assisted coding, automated analysis. “They’ve jumped to a solution, not an outcome,” Dan observes.  

This mirrors earlier vendor-driven cycles in cyber security and cloud adoption. A concern emerges — phishing, cost, automation — and the organisation buys a tool before clarifying what success looks like.

Samrat explains that AI strategy must align with business objectives: revenue uplift, EBIT impact, and time-to-market acceleration. “You need to put proper measuring criteria for success. What is the definition of done?” he says.

Without that clarity, initiatives drift, prototypes proliferate, and executive confidence declines.

This lack of defined intent feeds into the next structural weakness.

Challenge #2. Weak data foundations and identity blind spots

For Samrat, the most underestimated AI risk sits beneath the model layer. He offers an analogy to explain it:

“The engine on your Ferrari is not going to run if you put bad oil into it. That oil is your data.”

AI systems amplify whatever they are fed. If data is poorly classified, incomplete, biased, or loosely governed, the output reflects that instability.

According to the experts, AI risk clusters around two dominant themes: data exfiltration on one side, and hallucination and bias on the other. Both trace back to governance and quality.

 “Erosion and hallucination aren’t necessarily inherent in the AI itself. It’s in what you put in,” Dan clarifies.  

An AI system trained on poorly labelled internal data can produce confident but flawed recommendations. If those outputs feed automated decisions — credit approvals, pricing adjustments, hiring filters — the impact compounds.

 “If you do not have in your organisation a data classification policy, you are failing right there,” Samrat says.

Even documented policies are insufficient unless implemented across structured and unstructured datasets, supported by labelling, monitoring, and non-human identity controls.

Studies show risk concentrating in data governance, where misconfigured data stores and identity gaps create the high-impact vulnerabilities exploited in most breaches. According to one industry survey, misconfigured cloud storage affects 93% of deployments and has contributed to hundreds of breaches and billions of exposed records in recent years. Constella Intelligence’s 2025 Identity Breach Report found that credential theft drives most breaches, with 4.3 billion unique email addresses exposed in 2024, a 58% year-on-year increase.

As AI systems integrate with internal data lakes and warehouse layers, identity management becomes pressing. Non-human identities — service accounts, API tokens, automated agents — often outnumber human users in modern cloud estates. Without strong controls, AI becomes an access amplifier.
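To illustrate what “strong controls” can mean in practice, here is a minimal review sketch over an inventory of non-human identities. The record shape, scope names, and rotation threshold are illustrative assumptions, not any specific cloud provider’s API.

```python
from dataclasses import dataclass, field

# Illustrative record for a non-human identity (service account, API token, agent).
# Field names and scopes are hypothetical, not tied to a specific provider.
@dataclass
class NonHumanIdentity:
    name: str
    kind: str                                   # "service_account", "api_token", "agent"
    scopes: set[str] = field(default_factory=set)
    days_since_rotation: int = 0

# Assumed review policy: broad scopes and stale credentials get flagged.
BROAD_SCOPES = {"admin", "write:all", "read:all"}
MAX_CREDENTIAL_AGE_DAYS = 90

def review(identities: list[NonHumanIdentity]) -> list[str]:
    findings = []
    for ident in identities:
        broad = ident.scopes & BROAD_SCOPES
        if broad:
            findings.append(f"{ident.name}: over-privileged scopes {sorted(broad)}")
        if ident.days_since_rotation > MAX_CREDENTIAL_AGE_DAYS:
            findings.append(f"{ident.name}: credential unrotated for {ident.days_since_rotation} days")
    return findings

# Example: an AI ingestion agent with blanket read access and a stale credential.
inventory = [
    NonHumanIdentity("rag-ingest-agent", "agent", {"read:all"}, days_since_rotation=140),
    NonHumanIdentity("pricing-api-token", "api_token", {"read:pricing"}, days_since_rotation=20),
]
for finding in review(inventory):
    print(finding)
```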

This foundation problem connects directly to financial risk.

Challenge #3. AI’s financial shock

Cloud adoption delivered a painful lesson: consumption-based models punish poor visibility.

Samrat sees the same trajectory forming in AI:

“License is peanuts in AI. The thing that is going to cost us is the consumption.”

Many organisations fixate on discounted licences while underestimating token usage, model complexity, and compute intensity. Few teams understand their actual burn rate.

Samrat draws a parallel with early cloud enthusiasm. It took organisations years to master hyperscaler pricing models, and FinOps emerged as a discipline only after overspend forced attention.

The same pattern could repeat in AI. High-frequency inference workloads, multimodal processing, and large-scale training cycles carry substantial compute demands. Without strong cost observability, departments may scale usage far beyond the forecast.
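To make the burn-rate point tangible, here is a back-of-the-envelope projection. Every figure, including the per-token prices, is an illustrative assumption rather than real vendor pricing.

```python
# Back-of-the-envelope burn-rate projection for one inference workload.
# All figures are illustrative assumptions, not actual vendor pricing.
PRICE_PER_1M_INPUT_TOKENS = 3.00    # USD, assumed
PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # USD, assumed

requests_per_day = 50_000           # assumed departmental usage
input_tokens_per_request = 2_000    # prompt plus retrieved context
output_tokens_per_request = 500

daily_cost = requests_per_day * (
    input_tokens_per_request / 1_000_000 * PRICE_PER_1M_INPUT_TOKENS
    + output_tokens_per_request / 1_000_000 * PRICE_PER_1M_OUTPUT_TOKENS
)
print(f"Daily: ${daily_cost:,.0f}, monthly: ${daily_cost * 30:,.0f}")
# Daily: $675, monthly: $20,250 -- one use case, one department's volume.
```

At these assumed rates, a single use case dwarfs its licence cost within weeks, which is exactly the dynamic Samrat warns about.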

Industry analysts from Gartner, IDC, and Goldman Sachs predict explosive growth in AI infrastructure spending through 2029, driven by hyperscaler capex and enterprise AI demands.

Financial opacity increases pressure on security and governance functions, leading to the final challenge.

Challenge #4. Security as an overlay function

Security teams are often positioned as reviewers at the end of delivery cycles. “If that happens right at that point, the process is broken,” Samrat reiterates when describing last-minute AI risk assessments.

Dan uses an engineering metaphor. You would not build a steam train or a 50-storey building without drawings. AI deployments are frequently assembled without equivalent architectural rigour.

Late-stage review creates friction. It also creates blind spots.

Dan introduces a less-discussed threat model: gradual data erosion. Rather than a dramatic breach, malicious actors could manipulate data subtly, degrading the integrity of decisions over time. AI systems accelerate the speed at which corrupted input influences output.

Samrat widens the lens.

“The war of the future is not going to be fought using cannons. It will be on the laptops, on the servers, and on our data.”

Security leadership must therefore extend beyond tooling and into enterprise design thinking. Samrat makes a bold prediction: the CISO role will not exist in its current form. Security, he argues, must be federated and embedded across business units.

Without that integration, AI intensifies rather than resolves systemic weakness.

These challenges are interconnected. Strategy ambiguity drives rushed execution. Weak data foundations amplify model risk. Financial blind spots increase executive pressure. Late-stage security review compounds fragility.

The response must be equally structured.

Framework for safe AI adoption

Addressing these issues requires architectural thinking, commercial discipline, and embedded governance. Both experts offer practical steps that turn AI from experimentation into an engineered capability.

More specifically, Samrat has designed a six-layer framework for the safe adoption of AI across the enterprise. Each layer addresses a specific failure pattern observed in rushed implementations:

  1. Enterprise AI Strategy and Business Objective Alignment

  2. Data Foundation and Governance

  3. AI Governance and Guardrails

  4. Secure Cloud Landing Zone

  5. Enterprise AI Platform (as a Service)

  6. AI FinOps, Lifecycle Management, and Observability

Let’s look at each in more detail.

Layer #1. Enterprise AI Strategy and Business Objective Alignment

The first layer corrects the most common error: starting with tools instead of intent.

AI strategy must be grounded in organisational context. Industry frameworks are useful, but they cannot replace internal commercial reality. AI initiatives must link directly to measurable business objectives — revenue growth, cost efficiency, EBIT impact, speed to market.

Samrat insists that the strategy must include a “definition of done.” That means:

  • Clear performance baselines

  • Quantified expected benefit

  • Defined ownership

  • Commercial metrics agreed upfront

Dan reinforces this by observing that many leaders jump to a solution before clarifying the outcome. When that happens, governance becomes reactive, and demonstrating value becomes difficult.

Practically, this layer requires:

  • Executive alignment before the technical build begins

  • Financial modelling of expected return

  • Clear documentation of business hypotheses

  • Formal review gates tied to metrics

Without this, later layers carry structural ambiguity.
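One way to make a “definition of done” enforceable is to capture it as structured data rather than a slide. The sketch below is a minimal illustration; the field names and example use case are assumptions, not Samrat’s template.

```python
from dataclasses import dataclass

# A "definition of done" as structured data, so review gates can test it.
# Field names and the example use case are illustrative assumptions.
@dataclass(frozen=True)
class DefinitionOfDone:
    use_case: str
    owner: str            # defined ownership
    metric: str           # commercial metric agreed upfront
    baseline: float       # clear performance baseline
    target: float         # quantified expected benefit
    review_date: str      # formal review gate

    def gate_passes(self, measured: float) -> bool:
        # Higher is better for this metric; invert for cost-style metrics.
        return measured >= self.target

dod = DefinitionOfDone(
    use_case="AI-assisted returns triage",
    owner="Head of Customer Operations",
    metric="self-service resolution rate (%)",
    baseline=40.0,
    target=55.0,
    review_date="2026-06-30",
)
print(dod.gate_passes(measured=58.0))   # True: the review gate clears
```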

Layer #2. Data Foundation and Governance

The second layer addresses the most underestimated AI risk: data integrity.

AI systems amplify whatever they ingest. If data classification is weak, lineage unclear, or quality inconsistent, the output reflects those weaknesses at scale. Also, poor data can cause a gradual degradation of decision integrity.

This layer prevents AI from amplifying pre-existing fragility. It requires the following, illustrated in the sketch after this list:

  • Implemented data classification policies

  • Labelling of structured and unstructured datasets

  • Clear ownership and stewardship

  • Data lineage tracking

  • Quality validation mechanisms

  • Non-human identity governance
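As a concrete illustration of the first few requirements, here is a minimal pre-ingestion gate that refuses unclassified or unowned datasets. The label taxonomy and metadata fields are assumptions; a real implementation would sit inside the organisation’s data catalogue.

```python
# Minimal pre-ingestion gate: a dataset may not enter an AI pipeline unless it
# carries a classification label and a named steward. Taxonomy is illustrative.
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential", "restricted"}
AI_INGESTIBLE = {"public", "internal"}   # assumed policy for this pipeline

def can_ingest(dataset: dict) -> tuple[bool, str]:
    label = dataset.get("classification")
    if label not in ALLOWED_CLASSIFICATIONS:
        return False, "missing or unknown classification label"
    if not dataset.get("steward"):
        return False, "no named data steward"
    if label not in AI_INGESTIBLE:
        return False, f"classification '{label}' not approved for this pipeline"
    return True, "ok"

print(can_ingest({"name": "store_footfall", "classification": "internal",
                  "steward": "retail-analytics"}))   # (True, 'ok')
print(can_ingest({"name": "hr_records"}))            # (False, 'missing or unknown classification label')
```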

Layer #3. AI Governance and Guardrails

The third layer formalises principles and oversight.

Samrat keeps the principle set intentionally small: ethical AI, responsible AI, secure AI, and explainable AI.

Explainability is particularly important. Systems must not become “black magic.” Teams need to understand how outputs are produced and how to interrogate them.

This layer includes:

  • Defined AI principles endorsed at the executive level

  • Policy frameworks tied to those principles

  • Risk and opportunity assessment processes

  • Use-case approval workflows

  • Clear accountability models

This addresses the challenge of late-stage security review. When guardrails are defined early, security is embedded rather than appended.
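To show how guardrails can gate delivery rather than trail it, here is a minimal sketch of a use-case approval check keyed to the four principles above. The evidence requirements and wording are illustrative assumptions, not an actual approval workflow.

```python
# Minimal use-case approval gate built on the four principles Samrat names:
# ethical, responsible, secure, and explainable AI. Evidence fields are
# illustrative assumptions.
REQUIRED_SIGNOFFS = {
    "ethical": "ethics review completed",
    "responsible": "intended-use and misuse cases documented",
    "secure": "threat model and adversarial test plan attached",
    "explainable": "output explanation method documented",
}

def approval_gaps(use_case: dict) -> list[str]:
    evidence = use_case.get("evidence", {})
    return [f"{p}: {req}" for p, req in REQUIRED_SIGNOFFS.items() if p not in evidence]

proposal = {"name": "customer email summariser",
            "evidence": {"secure": "pen-test plan v2", "explainable": "prompt audit log"}}
for gap in approval_gaps(proposal):
    print("blocking:", gap)
# blocking: ethical: ethics review completed
# blocking: responsible: intended-use and misuse cases documented
```

The point of the pattern is that a missing sign-off blocks the use case before build, not after.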

Layer #4. Secure Cloud Landing Zone

AI workloads sit on infrastructure. That infrastructure must inherit a secure design.

Samrat highlights the need for a hardened cloud landing zone, where:

  • Security controls are templated

  • Identity and access management are enforced

  • Network segmentation is predefined

  • Logging and monitoring are the default

Every tenancy or account created above that landing zone inherits baseline protections.

This corrects a recurring cloud-era mistake: building first, retrofitting controls later.

You would not build a multi-storey structure without understanding load distribution. AI infrastructure demands similar discipline.
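As a minimal illustration of inherited baselines, the sketch below checks a proposed tenancy against landing-zone defaults before provisioning. Control names are assumptions; in practice this logic typically lives in policy-as-code tooling rather than application code.

```python
# Minimal sketch: every new tenancy is checked against landing-zone baselines
# before provisioning. Control names are illustrative assumptions.
BASELINE = {
    "logging_enabled": True,
    "public_network_access": False,
    "iam_mfa_required": True,
    "default_encryption": True,
}

def baseline_violations(tenancy_config: dict) -> list[str]:
    return [
        control
        for control, required in BASELINE.items()
        if tenancy_config.get(control) != required
    ]

new_tenancy = {"logging_enabled": True, "public_network_access": True,
               "iam_mfa_required": True}
violations = baseline_violations(new_tenancy)
if violations:
    print("provisioning blocked:", violations)
# provisioning blocked: ['public_network_access', 'default_encryption']
```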

Layer #5. Enterprise AI Platform (as a Service)

Rather than allowing each business unit to procure and configure AI independently, Samrat proposes a unified internal AI platform. He describes it as offering AI capability “as a service” to the rest of the organisation. This platform includes:

  • Pre-approved data connectors

  • Standardised build pipelines

  • Observability built in

  • Multimodal registry management

  • Secure model deployment patterns

The goal is enablement without fragmentation. Business owners can configure use cases and move to market faster because governance, infrastructure, and controls are already baked in.

This layer addresses both speed and safety. It reduces duplication, improves consistency, and lowers friction.
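A minimal sketch of that “as a service” interface is shown below. The connector registry, pipeline names, and deployment pattern are hypothetical; this illustrates the pattern, not Samrat’s actual platform.

```python
# Sketch of the "AI as a service" idea: business units request a use case
# through one platform interface, and approved connectors, build pipelines,
# and observability come pre-wired. All names are illustrative assumptions.
APPROVED_CONNECTORS = {"crm", "product_catalogue", "support_tickets"}

class EnterpriseAIPlatform:
    def provision_use_case(self, name: str, connectors: set[str]) -> dict:
        unapproved = connectors - APPROVED_CONNECTORS
        if unapproved:
            raise ValueError(f"connectors not in approved registry: {sorted(unapproved)}")
        # Governance is inherited from the platform, not re-implemented per team.
        return {
            "use_case": name,
            "connectors": sorted(connectors),
            "pipeline": "standard-build-v1",       # standardised build pipeline
            "observability": "enabled-by-default",
            "deployment_pattern": "secure-model-serving",
        }

platform = EnterpriseAIPlatform()
print(platform.provision_use_case("returns triage", {"crm", "support_tickets"}))
```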

Layer #6. AI FinOps, Lifecycle Management, and Observability

The final layer is designed to prevent a repeat of early cloud overspend, when enthusiasm outpaced cost visibility and consumption quickly exceeded forecasts. Without that visibility, enthusiasm turns into budget shock.

Efficiency gains must be measured in time saved and capital allocated.

AI FinOps includes:

  • Compute usage monitoring

  • Token consumption tracking

  • Budget forecasting models

  • Chargeback mechanisms

  • Lifecycle management for models

  • Continuous performance review

This layer ensures AI remains economically sustainable.
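To ground the list above, here is a minimal sketch of per-team token metering with a chargeback summary and a budget alert. The blended rate and team budgets are illustrative assumptions.

```python
from collections import defaultdict

# Minimal AI FinOps mechanics: per-team token metering, chargeback, and a
# budget alert. Rates and budgets are illustrative assumptions.
PRICE_PER_1K_TOKENS = 0.002   # USD, assumed blended rate
TEAM_BUDGETS = {"marketing": 5_000.0, "supply_chain": 8_000.0}

class UsageMeter:
    def __init__(self):
        self.tokens = defaultdict(int)

    def record(self, team: str, tokens_used: int) -> None:
        self.tokens[team] += tokens_used

    def chargeback(self) -> dict[str, float]:
        return {t: n / 1_000 * PRICE_PER_1K_TOKENS for t, n in self.tokens.items()}

    def over_budget(self) -> list[str]:
        return [t for t, cost in self.chargeback().items()
                if cost > TEAM_BUDGETS.get(t, 0.0)]

meter = UsageMeter()
meter.record("marketing", 1_200_000_000)   # 1.2B tokens this month
meter.record("supply_chain", 900_000_000)
print(meter.chargeback())   # {'marketing': 2400.0, 'supply_chain': 1800.0}
print(meter.over_budget())  # [] -- within budget this cycle
```

The chargeback mechanism matters as much as the meter: once teams see consumption on their own ledger, usage patterns change.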

Key takeaways

Taken together, the discussion highlights several practical principles worth keeping in mind.

  1. AI enthusiasm must be matched with commercial clarity.

  2. Data governance determines the reliability and security of AI.

  3. Consumption costs demand FinOps discipline from day one.

  4. Identity and access controls are central to AI risk management.

  5. Security must be embedded in enterprise design.

  6. Early discipline prevents a delayed crisis.

The pressure to adopt AI will not slow down. What remains within an organisation's control is how deliberately it is designed, governed, and funded.

Access to technology is not the issue. The difference is made in sequencing, discipline, and architectural clarity. When intent is defined early, data foundations are sound, costs are visible, and security is embedded in design, AI becomes manageable.

For organisations serious about building AI capability that is commercially sound and defensible from a cyber security perspective, the work begins long before deployment.

At Chaleit, these are exactly the conversations we are having with leaders who want to get the structure right before scale begins.

About the authors

Samrat Seal

Samrat Seal is Head of Transformation and Governance (AI and Cyber Platforms) at Kmart Australia Ltd., where he sits at the intersection of strategic innovation and responsible technology adoption. With a mandate that spans artificial intelligence, cybersecurity, and emerging technology platforms, Samrat drives the organisation's ambition to harness emerging technology not just for operational efficiency, but as a genuine source of competitive advantage. His work shapes how thousands of people across the business interact with AI — defining the guardrails, the enablement frameworks, and the vision that make transformation both bold and trustworthy.

A recognised technology thought leader with more than 23 years in technology leadership roles, Samrat brings a rare ability to bridge technical detail and strategic business outcomes, translating complex technology landscapes into clear, actionable direction for boards, executives, and frontline teams alike. He is particularly focused on building emerging platforms and capabilities, and on the governance challenges that define this moment in enterprise AI: how organisations can move quickly without moving recklessly, how they build cultures of experimentation alongside cultures of accountability, and how the right platforms, deployed with intention, become multipliers of human potential rather than replacements for it.

Samrat writes and speaks on the future of the enterprise technology landscape, with a focus on AI strategy, platform governance, and the evolving relationship between people and intelligent systems. He believes the organisations that will lead the next decade are not simply those that adopt AI earliest, but those that adopt it most thoughtfully, with clarity of purpose, strength of governance, and an unrelenting focus on outcomes that matter.

Dan Haagman

Dedicated to strategic cyber security thinking and research, Dan Haagman is a global leader in the cyber domain — a CEO, client-facing CISO, Honorary Professor of Practice, and trusted advisor to some of the world’s most complex organisations.

Dan’s career began nearly 30 years ago at the London Stock Exchange, where he was part of the team that developed its first modern Security Operations Centre (SOC). He went on to co-found NotSoSecure and 7Safe, both acquired after helping shape the industry’s penetration testing and training practices.

His deepest commitment is to what followed: Chaleit — a company that has become Dan’s life’s work and passion. Founded not just to participate in the industry, but to elevate it, Chaleit brings together deep offensive testing capabilities and mature consulting, helping clients move from diagnosis to resolution. 

Today, he leads a globally distributed team across seven countries, steering Chaleit with principles of longevity and transparency, and guiding it toward a future public offering.

Dan is also the founder of the CISO Global Study — an open-source initiative created for the benefit of the broader industry. Through it, he works alongside hundreds of CISOs globally, distilling insight, exchanging learning, and challenging the assumptions that shape the field. 

Disclaimer

The views expressed in this article represent the personal insights and opinions of Dan Haagman and Samrat Seal. Dan Haagman's views also reflect the official stance of Chaleit, while Samrat Seal's views are his own and do not necessarily represent the official position of his organisation. Both authors share their perspectives to foster learning and promote open dialogue. 


About this article

Series:

Cyber Strategy Collective

Topics:

  • Strategy

