As movies and TV have long told us, it's the bullet you don't see that kills you. That was true for Target in 2013, when the alerts its tools generated went unheeded, and it has remained true for many others since.
Pick any organization with thousands of employees and look at its IT and security operations staff as they really are. You'll see alert fatigue driving up employee attrition in a landscape where skills gaps are wide and recruiting costs are rising. Worse, the adversaries are still storming the enterprise.
Big changes are needed, along with an environment conducive to making them. And these problems are more human than technical. Add it up, and the C-suite must intervene.
Is Big Data Analytics Big Enough, or Making Alert Fatigue Worse?
Alert fatigue is not new. The promise of Big Data and machine learning is. The concept is simple: reduce mountains of alerts to molehills. But is the promise meeting reality?
Let’s look at some statistics:
- FireEye survey of C-level security executives at large enterprises regarding alerts: 52% are false positives and 64% are redundant, yet 40% of the organizations manually investigate each one
- Endpoint alerts: 45% are considered reliable and only 19% are investigated; an average of 1,156 hours per week is spent detecting and containing insecure endpoints (Ponemon, "The Cost of Insecure Endpoints", June 2017)
- Cisco 2017 Security Capabilities Benchmark Study regarding alerts: 28% are deemed legitimate, 44% of surveyed organizations see more than 5,000 alerts per day, and organizations can investigate only 56% of each day's alerts
Keep an Eye on Employee Attrition and Start Treating the Symptoms
Per a study by Kronos and Future Workplace, 95% of HR leaders say employee burnout is torpedoing employee retention, and 46% of them attribute up to half of their observed attrition to burnout. The larger the organization, the worse the problem. They cited unreasonable workloads and too much overtime as leading causes. And while 97% of them plan increased investment in recruiting technology, budget remains an obstacle to funding employee retention programs.
The cyber skills gap is too wide, and the time to recoup the investment in a lost employee too long, to accept high attrition rates. Executives need to shore up employee retention before, and while, they attack the root causes of the alerts.
Know What Is Driving Alert Volume
Per a Rapid7 report, alert volume tracks employee activity. When employees are active, they interact with malicious emails and attachments, browse compromised websites, open weaponized documents, and so on. These interactions drive not only alerts from endpoint sensors but also alerts from network sensors, which sit downstream of those interactions.
Reduce Alert Volume at the Source with Preventative Measures
There are two acceptable ways to reduce alerts at the source. First, implement a continuous employee cyber-readiness program. Many organizations mistake this for a security awareness program. The former addresses the individual employee through training, cyber exercises (e.g., phishing tests), and risk scoring. Mature security practices already score risk per system or asset; why not per employee? You can't fix what you don't measure.
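To make the per-employee risk-scoring idea concrete, here is a minimal sketch. The inputs, field names, and weights are all hypothetical illustrations, not a prescribed model; a real program would calibrate them against observed incident data.

```python
# Illustrative sketch: scoring individual employees the way mature practices
# already score systems or assets. All fields and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class EmployeeReadiness:
    phish_tests_failed: int        # failed simulated-phishing exercises
    phish_tests_total: int         # simulated-phishing exercises received
    training_modules_overdue: int  # overdue cyber-training modules
    prior_incidents: int           # real incidents traced to this employee

def risk_score(e: EmployeeReadiness) -> float:
    """Return a 0-100 risk score; higher means more likely to drive alerts."""
    phish_rate = (e.phish_tests_failed / e.phish_tests_total
                  if e.phish_tests_total else 0.0)
    # Hypothetical weighting: phishing susceptibility dominates the score.
    raw = 60 * phish_rate + 10 * e.training_modules_overdue + 15 * e.prior_incidents
    return min(100.0, raw)

alice = EmployeeReadiness(phish_tests_failed=3, phish_tests_total=10,
                          training_modules_overdue=1, prior_incidents=0)
print(risk_score(alice))  # 60*0.3 + 10*1 = 28.0
```

Scores like these let a readiness program target extra training at the highest-risk employees first, rather than spreading generic awareness content evenly.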
The second approach overlaps with, and compensates for failures of, the first: deploy tools that PREVENT endpoint compromises regardless of what employees do or fail to do. Seek tools that compensate for cyber-hygiene failures, which are chronic and pervasive. Avoid tools that depend on a human in the loop to prevent or curtail endpoint compromises. And in evaluations, stress the lifecycle operating costs and skills requirements not just of the endpoint tool itself but of every tool downstream from it.
Endpoint Security Tools Should Reduce, Not Drive, Labor Intensity
Beyond the hype of machine learning magically solving cybersecurity problems, too many organizations are investing in endpoint tools that actually drive labor costs higher. Worse, those tools deepen an already severe cybersecurity skills gap.
The catastrophic failures of traditional AV have created a false paradigm that numerous endpoint compromises are normal and unavoidable.
“Ineffective endpoint security strategies are costing organizations $6 million in detection, response, and wasted time.” (Ponemon, “The Cost of Insecure Endpoints”, June 2017)
This has led to the adoption of post-compromise tools, which strive to detect, contain, and react after endpoints are compromised. Endpoint Detection and Response (EDR) and Behavior Monitoring are the two main categories. They further fuel an already rich reserve of alerts mined by all kinds of tools and specialists: SIEMs, threat hunters, intrusion detection systems, incident response tools and responders, and so on.
License costs for these endpoint tools are low; their lifecycle costs are not. They require specialists, and lots of them. Worse, the tools must not only detect but also act decisively and remediate. Still worse, time matters: compromise costs grow geometrically with delay. All those alerts require humans in the loop, and false-positive rates are high enough that tool administrators opt for humans in the loop rather than risk the disruptions of the automated reactions some tools offer for some indicators of compromise. Per a survey by Barkly, 42 percent of companies believe their users lost productivity as a result of false positives.
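The claim that compromise costs grow geometrically with containment delay can be made concrete with a toy model. The base cost and per-day growth factor below are invented purely for illustration, not figures from any of the studies cited above.

```python
# Toy model: cost of a compromise growing geometrically with containment delay.
# The $10,000 base cost and 1.5x-per-day growth factor are illustrative
# assumptions only, not figures from any study.

def compromise_cost(days_to_contain: int, base_cost: float = 10_000.0,
                    growth_per_day: float = 1.5) -> float:
    """Cost multiplies by growth_per_day for each day containment slips."""
    return base_cost * growth_per_day ** days_to_contain

for days in (0, 2, 5):
    print(f"contained after {days} day(s): ${compromise_cost(days):,.0f}")
```

The point of the model is not the specific numbers but the shape of the curve: each day a human-in-the-loop queue delays containment multiplies, rather than adds to, the eventual cost.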
Seek out Radically Different Endpoint Security Tools that Prevent Compromises
This recommendation isn't as obvious as it seems. Why does it require executive intervention? Simple: executives must give their IT and security operations leaders the cover and resources to take chances testing new and different approaches to nipping alerts at the endpoint. And evaluation criteria must heavily weigh lifecycle operations considerations.