Matt Foster: Cybersecurity isn’t a tick-box exercise

What do you do in the first 15 minutes of a cybersecurity incident? Who keeps track of what’s happening, and how do you minimise damage? Do executives have the information needed to make vital decisions?

These are questions to consider ahead of time, not during a crisis, explains Matt Foster, Head of Information & Cyber Security at Lookers, who talked to Dan Haagman, CEO of Chaleit, about how he approaches the complexities of cybersecurity in a multi-billion pound company.

Read on for insights on gaining buy-in from stakeholders and maximising security.

A cyber crisis playbook

Cyber stays at the top of our risk rating internally as part of our enterprise risk framework. But the days of us being able to dashboard up theoretical control maturity are ending. It’s not a paper-based exercise anymore. It’s not good enough to bring the auditors in, whether external or internal, and tick boxes.

Security isn’t a tick-box exercise. We must test controls in the real world, increasingly without forewarning the blue team, the defenders.

Speed of response is critical. If you look at how crisis management as a general function operates, it’s typically by committee. But we don’t have that luxury during a significant cyber event.

When a crisis happens, it is chaotic at best. The actions we take as cyber professionals impact revenue and the business's overall health. The execs need to be part of it.

At Lookers, we introduced a playbook for the executives. What’s their role when we have a cyber crisis? Recently, we ran a role-playing exercise to get them accustomed to the decisions they must make. No plan survives the first engagement with the enemy, but a plan that’s at least been through one iteration can survive the real world.


Inevitably, when an incident happens, you’re all over the place until somebody in the room says, “We’ve got a plan.” A cyber incident is never a complete-information situation. But the faster the execs get information, the better their chance of making those existential decisions.

We’re that critical friend of the execs who feeds them the information they need. Somebody’s got to be there to track actions and keep the information flow going.

It’s not just the executives who must be flexible, agile, and educated. It is the whole workforce.

You must have deputies in place. And unless you’re in a giant multinational organisation, there will be crossover. You just don’t have that depth of senior expertise in the organisation. So, you need to rehearse and cross-skill.


The next time you run a simulation, allow for burnout. The A-team will probably not be available when the incident starts. Thirty hours into it, that A-team will need to sleep. Think through the transition process. You don’t need three Matt Fosters in the organisation, but people need minimum competence and understanding across functions.

Pre-canning strategic decisions

Why do we do crisis management and simulations? Because the decisions are tough, and making them under pressure is impossible.

During exercises, pre-can as many decisions as you can. Not death by decision tree, but pre-thinking the strategic assumptions: whether or not to pay a ransom; once exfiltration has been proven, what do you do then?
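As a minimal sketch of what a pre-canned decision register might look like, the structure below maps incident triggers to pre-agreed positions and owners. The triggers, positions, and owner roles here are illustrative assumptions, not the actual Lookers playbook.

```python
# A minimal sketch of a pre-canned decision register. Triggers, positions,
# and owners are illustrative assumptions, not Lookers' actual playbook.

from dataclasses import dataclass

@dataclass(frozen=True)
class PreCannedDecision:
    trigger: str   # the condition that activates this decision
    position: str  # the pre-agreed strategic stance
    owner: str     # who confirms and executes it under pressure

PLAYBOOK = [
    PreCannedDecision(
        trigger="ransom demand received",
        position="do not pay; engage insurer and legal counsel",
        owner="CEO",
    ),
    PreCannedDecision(
        trigger="data exfiltration confirmed",
        position="notify regulator within 72 hours; activate comms plan",
        owner="DPO",
    ),
    PreCannedDecision(
        trigger="core systems encrypted",
        position="isolate network segments; restore from offline backups",
        owner="Head of Infrastructure",
    ),
]

def decisions_for(trigger: str) -> list[PreCannedDecision]:
    """Look up the pre-agreed positions for a given incident trigger."""
    return [d for d in PLAYBOOK if d.trigger == trigger]
```

The value isn’t in the data structure itself; it’s that the hard conversations behind each entry happened in peacetime, not at 3 a.m. during an incident.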


Pre-canning decisions will remove some of the need for expertise in the room.

When a crisis hits, the red traits in our personalities come out and cause snap decisions. But has anyone checked what’s actually happened? Who’s following up on it and keeping a record?

An internal audit team, if you’re lucky, or your regulators will want to investigate the incident at some point. In pressure events like this, you can’t remember what happened 30 minutes ago. You must have records that facilitate handovers as the teams rotate, making that future investigation less traumatic.
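One way to keep that record is an append-only, timestamped action log. The sketch below assumes nothing about the actual tooling in use; the file name and host identifier are hypothetical.

```python
# A minimal sketch of an append-only incident action log. The point is
# timestamped, attributable records that survive team rotations and a
# future audit. LOG_PATH and the example host name are hypothetical.

import csv
from datetime import datetime, timezone

LOG_PATH = "incident_actions.csv"

def record_action(actor: str, action: str, rationale: str) -> None:
    """Append one timestamped, attributable entry to the incident log."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            actor,
            action,
            rationale,
        ])

# During the incident, every decision gets logged as it happens:
record_action(
    actor="Incident commander, shift B",
    action="Isolated finance VLAN",
    rationale="Lateral movement detected from host FIN-042",
)
```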


A continuous model for cybersecurity

We need a more continuous approach to our control assurance based on the changing threat landscape. Continuous doesn’t mean doing it every month or as an annual exercise. It means that as the risk profile changes, our view of control effectiveness changes with it.

Generally, it’s good enough if we’re tank-proof in our physical security; we can wait a couple of years before we re-evaluate that. If we’re looking at how vulnerable our staff are to phishing, we must re-evaluate that continuously.
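As an illustration of how that cadence might be operationalised, the sketch below derives a review interval from an assumed volatility score per control area. The scores and control names are invented for the example, not a description of the actual Lookers process.

```python
# A minimal sketch of risk-driven review scheduling. Volatility scores
# (1 = static threat picture, 10 = changing weekly) are invented here.

from datetime import date, timedelta

VOLATILITY = {
    "physical security": 1,
    "network segmentation": 4,
    "phishing resilience": 9,
}

def next_review(control: str, last_review: date) -> date:
    """Higher volatility means a shorter gap before the next real-world test."""
    days = 730 // VOLATILITY[control]  # from ~2 years down to ~11 weeks
    return last_review + timedelta(days=days)

print(next_review("physical security", date(2024, 1, 1)))    # ~2 years out
print(next_review("phishing resilience", date(2024, 1, 1)))  # ~11 weeks out
```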

Based on our threat intel and how we see the risk horizon moving, we assess the control areas impacted more frequently. We provide a non-theoretical level of assurance. We engage through purple teaming, having a pop at ourselves.


But, increasingly, the unannounced red teaming element must be there. It’s far too easy when you’re running a purple event internally: inevitably, the team playing defence is on high alert. Equally, once you’ve been spray-gunned with a relatively targeted attack, you move to a higher alert level.

It’s about testing both the security professionals and the human elements in the chain who are vulnerable to social engineering.

Familiarity breeds contempt. It’s not just about how we put warnings and detections in front of people, but it’s about how we test them and keep them fresh and current.

If people think threats are real, why do they make risky decisions? Because nothing bad’s ever happened to them personally. Keeping it fresh in their minds makes our controls much more effective.


Incentivising the right behaviours

Sometimes, subconsciously, we measure the wrong behaviour.

For example, who doesn’t incentivise their IT service desk on average answer speed? We measure them on first contact resolution. That’s what they’re supposed to do and what they get rewarded for. Generally, it’s a good idea. Who wants to bother tech teams with stuff we can shift left to the service desk?

So we’ve incentivised fixing it whilst the person is on the phone. That means doing it now, without validating the request. And it works, until it’s a social engineering attack.

We had a recent spike of bold social attacks, not just against IT but also HR. As a result, we have already changed the metrics on which we incentivise our service desk. We pulled any security-related tickets out of the first contact resolution KPI.
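A minimal sketch of that KPI change might look like the following, assuming a simple ticket structure with tags; the field names are illustrative, not the actual service desk schema.

```python
# A minimal sketch of excluding security-related tickets from a first
# contact resolution (FCR) KPI. Ticket fields and tags are assumptions.

def fcr_rate(tickets: list[dict]) -> float:
    """FCR across non-security tickets only, so the desk is never
    rewarded for resolving a possible social-engineering attempt fast."""
    scored = [t for t in tickets if "security" not in t["tags"]]
    if not scored:
        return 0.0
    resolved = sum(1 for t in scored if t["resolved_first_contact"])
    return resolved / len(scored)

tickets = [
    {"tags": ["password-reset", "security"], "resolved_first_contact": True},
    {"tags": ["printer"], "resolved_first_contact": True},
    {"tags": ["vpn"], "resolved_first_contact": False},
]
print(f"FCR: {fcr_rate(tickets):.0%}")  # 50% -- the security ticket is excluded
```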

Let’s incentivise the behaviour we want to happen.

Think you need to move your security beyond a tick-box exercise?

