TL;DR
Many organisations are spending time, money, and headspace on the wrong cyber security priorities. Tools designed to alert, report and visualise risk are unintentionally manipulating attention. As a result, decision-makers are overwhelmed by data, teams are stuck chasing alerts, and everyone is burning out under the weight of red bubbles and compliance deadlines.
Cyber security experts Dan Haagman (CEO, Chaleit) and Brock Maus (SVP, Information Technology, NAU Country Insurance Company) argue that instead of adding more dashboards and alerts, organisations should be slowing down, asking better questions, and working on the problems that actually matter.
The shift starts by breaking free from metrics-driven distraction and getting closer to the business.
Context: The distraction economy meets cyber security
We live in an age of cyber tools. Dashboards blink and alerts pile up, signalling risk and demanding action.
For example, research suggests the average Security Operations Centre (SOC) team receives over 11,000 alerts daily, almost a third of which are false positives. This constant barrage leads to alert fatigue, where analysts become desensitised, increasing the risk of missing real threats and causing delayed responses to attacks.
"The tools are built to engage you," says Brock. "They're designed like smartphones. That little red bubble gives you a dopamine hit. It says, 'Here’s something to worry about.' And it pulls you in."
This dopamine loop is well-documented in social media, where notifications exploit the brain’s reward system to drive compulsive checking. Similar psychological mechanisms are increasingly present in dashboards and alerting tools, pulling people’s attention to every new warning.
In cyber security, this loop breeds a culture of reaction without reflection. Teams don’t have time to think. Decisions are made to clear metrics, not to reduce risk.
"We buy a tool. That tool creates noise. So we hire engineers to manage the noise. Now we’re further away from the problem than when we started," Dan warns.
A study found that 69% of security teams dedicate more time to managing tools than to actual defence. And it’s no wonder, given that the average organisation works with 60 to 70 security tools, according to Gartner. This isn't just inefficient; it's dangerous, and it’s more common than most teams would like to admit.
Challenges
Challenge #1. Invented work
In many organisations, the implementation of a new platform subtly shifts strategic focus toward what the tool can measure or report, rather than what the business actually needs to protect.
"You think you're buying visibility," says Dan, "but what you're actually buying is a workload. The minute you install it, it creates tasks. And those tasks start to dominate planning and resources."
This creates what both experts term "invented work": tasks that exist solely because tools generate them, not because they address meaningful risks.
Brock adds that this trend displaces critical thinking.
"The tools tell you what to do before you've even had a chance to ask what problem you're solving. That’s the inversion. Instead of starting with risk, we start with features."
The outcome is that risk decisions get delegated to tool configuration screens, rather than being made in conversation with the business. This automation of attention dilutes strategic focus and, in some cases, creates dangerous blind spots.
Recent research confirms that overinvestment in new security tools — rather than risk-driven decisions — has resulted in complex, sprawling tool stacks that are difficult to manage and optimise. In addition, a staggering 70% of CISOs admit that their existing security tools are not as effective as they could be.
Naturally, this does not help with security team efficiency and motivation.
Challenge #2. Saturated and disconnected teams
Even as cyber teams grow, their effectiveness often stalls. New hires are brought in not to solve problems, but to manage tool-generated output: alerts, tickets, and dashboards. It creates the illusion of maturity, while widening the gap between detection and action.
"You hire engineers to tune the tool, analysts to triage alerts, and managers to track metrics," Dan points out. "But no one’s closer to the actual problem."
Brock explains how this leads to internal competition for resources. "We’re pulling teams into cyber backlogs that compete with actual business deliverables. App teams, data teams—they all get cyber tasks that have nothing to do with their goals."
Consider that around 35% of cyber security professionals report burnout due to alert overload and work pressure, according to 2025 cyber security workforce research. Moreover, approximately 83% of SOCs experience annual staff attrition, with burnout cited as a major cause.
Instead of building resilience, teams feel overwhelmed. Cyber becomes the group that gives people more work, not more protection.

Challenge #3. Invisible red bubbles
While teams focus intensively on visible metrics and tool-generated alerts, the most dangerous vulnerabilities often remain invisible. Misconfigurations, exposed credentials, and basic authentication failures — the actual attack vectors used in most successful breaches — frequently exist outside the scope of expensive security programmes.
"There'll be invisible red bubbles that we don't see, and the tools haven't picked up," warns Dan. "And that bubble may not be a vulnerability. It may be a misconfiguration. And that's how people are getting hacked."
Brock concurs: "Misconfiguration, end users clicking on phishing emails — it's worse than anything that exists out there."
This creates a paradox: the more sophisticated the security tool set, the more likely teams are to miss simple but critical vulnerabilities. Organisations end up with comprehensive coverage of theoretical risks whilst remaining exposed to practical threats.
Dan offers a telling example. A major financial company, despite spending millions on pen testing and using an ASM tool, had OAuth2 keys exposed on Postman. "Their ASM tool didn't find their OAuth2 keys," Dan explains. "It took the Chaleit team about three hours, not 15 days, to discover them." Meanwhile, the organisation's expensive security programme continued focusing on traditional vulnerability scans and compliance activities.
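Exposed secrets of this kind are often discoverable with nothing more than pattern matching over shared artefacts. The sketch below is illustrative only: the regexes and the sample exported snippet are invented for the example, and real secret scanners maintain far larger, more carefully tuned rule sets.

```python
import re

# Illustrative patterns only; production secret scanners add hundreds
# of rules plus entropy checks on top of simple regexes like these.
SECRET_PATTERNS = {
    "oauth2_client_secret": re.compile(
        r"client_secret[\"']?\s*[:=]\s*[\"']([A-Za-z0-9_\-]{16,})"
    ),
    "bearer_token": re.compile(r"Bearer\s+([A-Za-z0-9\-._~+/]{20,}=*)"),
    "aws_access_key": re.compile(r"\b(AKIA[0-9A-Z]{16})\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_value) pairs for likely exposed credentials."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(1)))
    return hits

# A hypothetical fragment of a publicly shared API collection.
exported_collection = 'POST /token {"client_secret": "s3cr3t_ABCDEFGH12345678"}'
print(find_secrets(exported_collection))
```

Pointing even a crude scanner like this at publicly shared workspaces, repositories, and paste sites is exactly the kind of targeted look that dashboard-driven programmes rarely schedule.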
Consider another example of this disconnect: a 100-day-plus security programme was swiftly superseded by a straightforward, six-day fix involving an Okta configuration checkbox. "Without authentication, there is no cyber security," Dan observes.
He explains Chaleit’s proactive philosophy in this context: "Most firms would sit there and say, 'There's your problem, here's the report, thank you very much.' But we had to say, 'Here's your problem, now get us to the team, we'll figure it out.'"
Challenge #4. Compliance overreaction
Regulatory requirements, while well-intentioned, often trigger disproportionate responses that consume resources while providing minimal additional protection. Rather than implementing reasonable, proportionate measures, organisations create elaborate programmes that far exceed actual requirements.
“We’re seeing a wave of new regulations (NYDFS, CCPA, APRA, GDPR), and while each has merit, the real challenge isn't the regulations themselves. It’s how we, more broadly as an industry and society, tend to respond. There's often a tendency to overcorrect, turning reasonable compliance into an operational burden. We live in an age of overreaction, where the fear of falling short can sometimes overshadow the actual requirements.”
This turns practical guidance into performance theatre. The result is late-year bottlenecks and rushed reports that satisfy auditors, not stakeholders. This year-end scramble creates exactly the wrong incentives. Instead of integrating compliance into ongoing business operations, it becomes a separate, resource-intensive activity that competes with strategic priorities.
"A lot of these regulations are not unreasonable," clarifies Brock. "They say to do an annual pen test, have a vulnerability management system, and do a third-party review of your vendors. They don’t say, 'Do a 300-question background check and scan every line of code.' But that's what people do."
The overreaction is amplified by tool vendors, who benefit from comprehensive implementations, and by internal teams, who prefer to overcomply rather than risk regulatory issues. The result is security programmes that consume disproportionate resources while creating minimal additional protection.
Speaking of vendors, we turn to the last challenge.
Challenge #5. Transactional vendors and the wrong kind of help
Many vendors sell to meet demand, not to solve problems. "They give you what you say you want," Brock observes, "not what you actually need."
Dan recounts a typical dynamic: "A client specifies a tool or test. The vendor responds with a quote. Everyone’s trying to hit the deliverable. But no one’s asking the most important question: Why are we doing this in the first place?"
Brock shares a positive counterexample. His team approached a vendor for full code scanning. The vendor responded: Why? They then proposed a more effective, faster, and cheaper solution using white-box and black-box testing instead of exhaustive scanning. "That’s a partner," Brock points out. "Not a vendor."
Partnership is part of the solution to fixing the challenges above. Based on their extensive experience, Brock and Dan offer practical insights and lessons.
Solutions
Solution #1. Problem-first security design
The first step to fixing the overreliance on security tools is flipping the sequence. Don’t start with what the tool can do. Start with what you actually need.
"We need to ask: what's the problem we’re trying to solve?" Brock explains. "Because if you’re just buying another tool, you’re probably creating more work, not solving the issue."
This requires establishing clear business requirements and understanding exactly what visibility or capability gap you're trying to fill. Too often, organisations buy tools that create more noise rather than addressing genuine business risks.
"Where's the context of that bubble?" asks Dan. "Where's the next level of triage that says, 'Let's work that problem, let's put that in terms of what the business wants'? Where's the risk context? What's the hypothesis around that red bubble?"
Practical implementation involves:
Mapping critical business processes and identifying genuine failure points rather than starting with tool capabilities.
Implementing "quick look" assessments that provide rapid risk validation without comprehensive analysis.
Designing security programmes around business workflows rather than tool architectures.
If you can't clearly explain what business problem the tool solves and what specific gap it fills, you're likely falling back into the red bubble trap.

Solution #2. Come off the treadmill
Both experts agree that security metrics should support clarity, not distraction.
The first step is deliberately creating "space to think": structured periods where teams step back from tool dashboards to assess whether current activities align with genuine business risks.
"Space to think doesn't mean sitting by yourself in a room with the lights off," clarifies Brock. "Space to think is brainstorming. Bringing your stories together. Whiteboarding."
This requires implementing what might be called a "why protocol" — every security initiative must clearly articulate the business problem it solves.
Practical implementation involves:
Establishing regular periods where teams assess whether current activities align with genuine business risks.
Creating decision matrices that prioritise based on potential business impact rather than tool-generated urgency.
Implementing regular reviews where teams can question existing assumptions about priorities.
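As a minimal sketch of such a decision matrix, findings can be scored on business impact and realistic exploitability rather than on the severity a tool assigns. The findings, scales, and weighting below are invented for illustration; real scales would need to be agreed with the business.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    tool_severity: int    # 1-5: what the dashboard's red bubble says
    business_impact: int  # 1-5: judged against critical business processes
    exploitability: int   # 1-5: how realistic the attack path is

def priority(finding: Finding) -> int:
    # Deliberately ignore tool_severity: business context drives the ranking.
    return finding.business_impact * finding.exploitability

findings = [
    Finding("Critical CVE on an isolated test box", 5, 1, 2),
    Finding("Exposed OAuth2 key in a public workspace", 2, 5, 5),
]

# Rank by business-driven priority, highest first.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):>3}  {f.name}")
```

Note how the ranking inverts the tool's own severity: the "critical" alert on an isolated box drops below the quietly exposed key that an attacker could actually use.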
"It takes courage to come off this treadmill," acknowledges Dan. "It takes time to make time, and a very brave person to say we're going to slow this down," Brock adds.
Solution #3. Active discovery over passive monitoring
The most dangerous vulnerabilities are often the ones your tools can't see. The solution isn't better tools; it's actively hunting for invisible risks using human curiosity and targeted investigation.
This type of discovery requires asking different questions:
Where do developers accidentally expose credentials?
What authentication assumptions might be wrong?
Which configurations look secure but aren't?
What would an insider with basic access actually see?
Instead of following tool outputs, start with how organisations actually get compromised:
Stolen credentials
Social engineering
Misconfigurations
Unpatched systems
"That's how you prioritise," explains Brock. "Focus on what actually gets you hacked, not what generates the biggest numbers."
This means:
Understanding critical business processes first
Identifying single points of failure in those processes
Testing assumptions about security controls
Validating that theoretical protections work in practice
The goal isn't comprehensive coverage; it's targeted discovery of high-impact vulnerabilities that would otherwise remain invisible until they're exploited.
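One way to make this hypothesis-driven testing concrete is to keep a small registry of assumptions, each paired with an executable check. The checks below inspect hardcoded sample data purely for illustration; in practice each would query your actual identity provider, cloud estate, or configuration store.

```python
def check_mfa_enforced(user_record: dict) -> bool:
    """Assumption under test: every privileged account has MFA enabled."""
    return not user_record.get("privileged", False) or bool(user_record.get("mfa", False))

def check_bucket_private(bucket_acl: str) -> bool:
    """Assumption under test: no storage bucket grants public read access."""
    return "public-read" not in bucket_acl

# Each hypothesis: a plain-language assumption, its check, and sample input.
hypotheses = [
    ("Privileged accounts require MFA", check_mfa_enforced,
     {"privileged": True, "mfa": False}),
    ("Storage buckets are not world-readable", check_bucket_private,
     "private"),
]

for statement, check, sample in hypotheses:
    status = "holds" if check(sample) else "FAILS"
    print(f"{statement}: {status}")
```

A failing check here is an invisible red bubble made visible: a concrete, falsified assumption rather than another undifferentiated alert.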
This doesn't mean abandoning technology. It means using it more strategically.
Active scanning tools, when directed by human hypothesis rather than automated schedules, become far more effective. Red team and purple team exercises that focus on specific business processes can uncover misconfigurations and authentication failures that passive monitoring may miss.
The key difference is intent: instead of running scans and hoping to find something, you're actively testing specific assumptions about how your security controls work in practice.
Active discovery transforms security from reactive tool management into proactive risk hunting. It's the difference between waiting for alerts and actively looking for trouble before it finds you.
Solution #4. Spread out the compliance load
Rather than treat pen testing or compliance as a once-a-year event, teams can break it into smaller efforts over time.
"Stacking everything into Q4 just burns people out," Dan explains. "Instead, we can spread the validation across the year, drip-feed issues to teams, and reduce pressure."
For more information on how to achieve effective security testing, read our guide to penetration testing.
A year-long approach still meets regulatory requirements but does so in a way that respects business operations. It also builds trust.
Practical implementation involves:
Breaking annual requirements into quarterly or monthly activities that integrate with normal business operations.
Focusing compliance efforts on validating existing controls rather than creating new bureaucratic processes.
Establishing clear boundaries between regulatory minimums and voluntary security enhancements.
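The spreading itself can be as mechanical as round-robin scheduling. The task list below is hypothetical; the point is the shape of the approach, one validated obligation at a time rather than a Q4 pile-up.

```python
from itertools import cycle

# Hypothetical annual compliance obligations.
annual_tasks = [
    "External pen test: customer portal",
    "Internal pen test: corporate network",
    "Third-party vendor review: batch A",
    "Third-party vendor review: batch B",
    "Access recertification",
    "Incident response tabletop",
    "Policy review and sign-off",
    "Backup restore validation",
]

def spread_over_quarters(tasks: list[str]) -> dict[str, list[str]]:
    """Round-robin annual obligations across Q1-Q4 instead of stacking them in Q4."""
    schedule = {q: [] for q in ("Q1", "Q2", "Q3", "Q4")}
    for quarter, task in zip(cycle(schedule), tasks):
        schedule[quarter].append(task)
    return schedule

for quarter, tasks in spread_over_quarters(annual_tasks).items():
    print(quarter, "->", "; ".join(tasks))
```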
Brock highlights the long-term positive effect of this approach:
"When teams feel supported, they’re more likely to collaborate. That’s when real security progress happens."

Solution #5. Invest in partners, not products
A good partner challenges assumptions. They ask better questions. They help teams define success.
"When a vendor tells you what you shouldn’t do, that’s how you know they’re on your side," says Brock. "They’re thinking long-term."
Dan suggests reworking the typical buying process: "Instead of specifying a scope and asking for a quote, write the scope together. Make it about the outcome, not the input."
This partner-driven collaboration requires:
Engaging security providers who ask "what problem are you trying to solve?" rather than "what tools do you need?"
Implementing collaborative scoping where vendors and clients jointly design programmes based on risk assessment.
Establishing success metrics based on business outcomes rather than tool performance indicators.
The partnership approach reduces waste, strengthens accountability, and results in better security decisions.
Key takeaways
Don’t let tools define your priorities. Start with the problem, not the platform.
Red bubbles are distractions unless backed by context and clarity.
Compliance should be integrated into business-as-usual, not stacked at the end of the year.
A true partner helps you think, not just deliver.
Security teams need space to think. Rationalise. Subtract. Simplify.
If it’s not helping reduce risk or enabling the business, it’s probably noise.
This essay isn’t an argument against tools, dashboards, or compliance. It’s a call to remember why those things exist in the first place.
If organisations are serious about reducing cyber risk, the first step might be to stop, think, and ask what really matters. We can assist in this process; it's part of how we work. Drop us a line.

About the authors
Brock Maus
Brock Maus is an accomplished SVP of Information Technology with over 20 years of experience in IT leadership roles within the insurance and agricultural sectors. Brock has a proven track record of driving digital transformation, system modernisation, and technology innovation to enhance business outcomes and customer engagement.
Brock has led diverse and high-performing teams at NAU Country Insurance Company, where he oversees the technology solutions and offerings. His leadership has been instrumental in the successful implementation of various technology projects, including the integration of advanced analytics tools and the deployment of industry-leading technology solutions. Brock has also spearheaded some of the first applications of drones, satellite imagery, optical character recognition (OCR), and artificial intelligence (AI) within the organisation. His expertise extends to areas such as cyber security, data management, and IT governance, ensuring compliance with industry standards and regulations.
Throughout his career, Brock has demonstrated a commitment to excellence and a passion for leveraging technology to solve complex problems. He has played a key role in the development and deployment of cutting-edge solutions that have significantly improved business processes and outcomes. Brock is known for his strategic thinking, collaborative approach, and ability to foster a culture of continuous improvement and innovation within his team.
Brock has been recognised for his contributions to the industry, including his role in driving NAU Country's digital transformation and his active participation in industry forums and conferences. His leadership and vision have been valuable to the organisations he has served, and he continues to provide strategic guidance and support to ensure their long-term success.
Dan Haagman
Dedicated to strategic cyber security thinking and research, Dan Haagman is the CEO and founder of Chaleit, a seasoned leader in global cyber security consulting, and an Honorary Professor of Practice at Murdoch University in Perth, Australia.
With nearly 30 years of experience, he began his journey at The London Stock Exchange, where he pioneered the development of their first modern SOC and defensive team. As a co-founder of NotSoSecure and 7Safe, both acquired by reputable firms, Dan has left a lasting impact on the industry.
Today, Dan leads a team of brilliant minds in seven countries, all focused on delivering world-class cyber security consulting. Chaleit reflects Dan's vision for the industry's future. Built on the core principles of longevity and transparency, the company is poised for a public offering within the next few years.
Dan has a passion for learning. With a pen and paper at hand, he dedicates significant time to reading, researching, designing systems, and learning with clients and peers with the goal of being a leading thinker and collaborator in the cyber industry.
Disclaimer
The views expressed in this article represent the personal insights and opinions of Dan Haagman and Brock Maus. Dan Haagman's views also reflect the official stance of Chaleit, while Brock Maus's views are his own and do not necessarily represent the official position of his organisation. Both authors share their perspectives to foster learning and promote open dialogue.