Why Are We So Slow To Detect Data Breaches?

Published: 22/11/2024   Category: security




Poor instrumentation of network sensors, bad SIEM tuning, and poor communication among security team members give breaches more time to fester



Security breach response time can be the crucial factor that determines the difference between a minor security incident and a major data breach with far-reaching business effects. And yet most organizations today are slow to detect breaches. What's worse, many have a deflated sense of how long it really takes them to sniff out an attacker on their networks. That slowness, and the lack of awareness of it, plays right into the hands of attackers who craft long-term attacks built around staying hidden on network resources for extended periods of time.
"The longer it takes to respond, the more firmly rooted the attacker will become, and the more difficult and costly it will be to find and remove all of their implants," says James Phillippe, leader of threat and vulnerability services for the U.S. at Ernst & Young. "More importantly, the longer it takes, the more likely an attacker is to find and exfiltrate the organization's secret sauce."
Fighting A Perception Problem
Part of the difficulty in achieving timely detection is that many line-of-business and even IT leaders think their organizations are already doing a good job. This week, a McAfee survey of 500 senior IT decision makers found respondents reporting that it took them an average of 10 hours to detect a breach. But other breach statistics and anecdotal evidence suggest otherwise.
According to the Verizon Data Breach Investigations Report, 66 percent of breaches took months or even years to discover. And a recent Ponemon Institute report sponsored by Solera Networks, a Blue Coat company, found that, on average, it takes companies three months to discover a malicious breach and more than four months to resolve it.
"This misplaced confidence in their response demonstrates the disconnect that can happen between business leaders and the security team," says Gretchen Hellman, director of SIEM product marketing at McAfee.
The statistics pointing to long breach-detection times are backed up by plenty of anecdotal stories from security professionals in the trenches.
"Many organizations simply don't have enough tech and security staff to notice these breaches when they occur. I worked with a university that didn't notice it had a data breach for almost six months, until a governmental organization notified them that an attack had likely been carried out against their servers," says Jonathan Weber, founder of Marathon Studios, who believes most data breaches are slow to be discovered because attacks today rarely cause external disruption to essential services. "Once the initial leak event has passed, there are few or no indicators of the breach until the hackers return or the data surfaces elsewhere."
Because organizations don't know what they don't know, the perception problem lingers. The disconnect stems from the very same visibility shortcomings that allow attackers to extend their stays on enterprise networks in the first place.
"At this point, organizations need to increase their visibility into what's happening in their enterprises and focus on eliminating those cybersecurity blind spots," says Jason Mical, vice president of cybersecurity for AccessData.
One security leader, Mike Parrella, director of operations for managed services at Verdasys, was more blunt about why he believes organizations have not worked to improve visibility on their networks.
"The main reason is because businesses and government alike are filled with idiots and ostriches," he says. "People are simply not looking for a leak -- they would rather not look, not be bothered, not spend to solve the problem, and so they are not finding it. They prefer to outrun their risk."
Instrumenting Sensors For Detection
To be fair, attackers have invested incredible amounts of money and time in devising methods of breaking in and stealing data under the proverbial radar. But experts say there are ways to adjust enterprise monitoring and intelligence paradigms to account for that.
Rather than thinking of defending the enterprise like a bank vault with a big door, says Dr. Mike Lloyd, CTO of RedSeal Networks, more organizations should think of it as similar to the way you'd secure a city.
"It's big, it's sprawling, it changes all the time without advance notice. You have to think about maps, about sensors, and you have to know where the pinch points are -- where you have threats like ammonia trains running on lines that happen to cross the same cheap land you built your football stadium on," he says. "That's why the industry is talking so much about Big Data -- the hope [currently unfulfilled] is that if we can pile together all the overwhelming separate piles of sensor data, making an even bigger, even more overwhelming mountain, we'll be able to make sense out of it and pick up the patterns of attack."
Lloyd believes that most network monitoring sensor infrastructure today is poorly instrumented, and he's not alone.
"More often than not, mistakes are made in the poor placement of monitoring technology," says Peter Tran, senior director of RSA's Advanced Cyber Defense Practice. "From a strategic design perspective, enterprises need to approach detection in terms of behaviors indicative of exploitation rather than static rules triggered on known-bad indicators of compromise. This is a shift to intelligence-driven security and a break from network-centric security to data in motion based on behavioral-driven analytics."
Starting out, Lloyd recommends organizations take three initial steps. First, they should map the infrastructure to get a lay of the land and figure out where to put sensors. Next, they should identify obvious weak points. And, finally, they should start to design zones into the infrastructure so that monitoring can be done more easily at zone boundaries.
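To make the zoning idea concrete, here is a minimal sketch of how zone-boundary links could be surfaced as candidate sensor locations. It is only an illustration, assuming a simple graph model of subnets built with the Python networkx library; the addresses, zone names, and topology are invented and are not drawn from anything Lloyd describes.

import networkx as nx

# Model each subnet as a node tagged with the zone it was designed into.
G = nx.Graph()
subnets = {
    "10.1.0.0/24": "user-lan",
    "10.2.0.0/24": "user-lan",
    "10.9.0.0/24": "server-dmz",
    "10.20.0.0/24": "crown-jewels",    # e.g., the database segment
    "192.0.2.0/24": "internet-edge",
}
for cidr, zone in subnets.items():
    G.add_node(cidr, zone=zone)

# Links between subnets (routers/firewalls), as discovered during mapping.
G.add_edges_from([
    ("10.1.0.0/24", "10.9.0.0/24"),
    ("10.2.0.0/24", "10.9.0.0/24"),
    ("10.9.0.0/24", "10.20.0.0/24"),
    ("10.9.0.0/24", "192.0.2.0/24"),
])

# A link whose endpoints sit in different zones is a pinch point worth
# instrumenting; intra-zone links are lower priority.
boundary_links = [
    (a, b) for a, b in G.edges()
    if G.nodes[a]["zone"] != G.nodes[b]["zone"]
]
for a, b in boundary_links:
    print(f"monitor: {a} [{G.nodes[a]['zone']}] <-> {b} [{G.nodes[b]['zone']}]")

The payoff is Lloyd's point about pinch points: once zones exist, the handful of links that cross them is where a small number of sensors buys the most visibility.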
[How have attackers managed to break AV with a glut of malware? See 10 Ways Attackers Automate Malware Production.]
Data Analysis Is Key
As important as it is to determine where sensors are placed, it is equally important to adjust what exactly they're looking for, says Wade Williamson, senior security analyst for Palo Alto Networks. As he puts it, most security products aren't really designed to detect breaches, per se.
"Fundamentally, if you look at most networks, security is overwhelmingly focused on detecting and blocking a malicious payload. This is a pretty reasonable approach, but it's also incomplete," he says. "Breaches rely on a host of tools to investigate, gather data, and communicate with the remote attacker, and detecting these tools becomes just as important as stopping malicious payloads."
He recommends that organizations be on the lookout for custom tunnels, unauthorized proxies, RDP, and file transfer applications.
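As a rough illustration of what hunting for that tooling could look like, the sketch below sweeps flow records for RDP sessions initiated from hosts outside an assumed jump-host allowlist, and for long-lived outbound connections on unusual ports that might indicate a custom tunnel. The record fields, thresholds, and addresses are assumptions made for the example, not anything prescribed by Williamson or Palo Alto Networks.

from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class Flow:
    src: str          # source IP
    dst: str          # destination IP
    dst_port: int
    duration_s: int   # session length in seconds
    outbound: bool    # True if the destination is outside the network

ADMIN_JUMP_HOSTS = {"10.1.0.5"}   # hosts permitted to initiate RDP (assumed)
TUNNEL_DURATION_S = 6 * 3600      # flag outbound sessions longer than six hours

def suspicious(flows: Iterable[Flow]) -> Iterator[Tuple[str, Flow]]:
    for f in flows:
        if f.dst_port == 3389 and f.src not in ADMIN_JUMP_HOSTS:
            yield ("rdp-from-unexpected-host", f)
        elif f.outbound and f.duration_s > TUNNEL_DURATION_S and f.dst_port not in (80, 443):
            yield ("possible-long-lived-tunnel", f)

flows = [
    Flow("10.1.0.23", "10.20.0.4", 3389, 120, False),       # RDP from a user desktop
    Flow("10.9.0.7", "203.0.113.9", 8443, 9 * 3600, True),  # nine-hour outbound session
]
for reason, f in suspicious(flows):
    print(reason, f.src, "->", f.dst, "port", f.dst_port)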
But appropriately setting up sensors is only the easy first step. From there, organizations have to figure out how to put to good use all of the data those sensors spew out. It's this data that holds the key to driving down the time it takes to detect breaches.
"One of the most powerful drivers is data, both unstructured and structured -- particularly when it is cross-correlated and analyzed against external sources, historical data, and trending," Tran says. "This strategically gives an enterprise the ability to trend, score, and predict how likely it is to be targeted."
According to Phillippe, the problem is that even organizations that do have security information and event management (SIEM) tools in place have mostly not tuned them well.
"A well-tuned SIEM is the heart of a security operations center and enables alerting to be accurate and complete," he says. "That way, when an analyst gets an alert, they have all of the necessary context to respond quickly and comprehensively."
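One hedged sketch of the kind of tuning Phillippe has in mind: enrich a raw alert with asset context -- owner, criticality, zone -- before it reaches the analyst, so the ticket already answers the who-and-where questions. The inventory and the paging rule below are invented for illustration.

# Assumed asset inventory keyed by IP; in practice this would come from a CMDB.
ASSET_INVENTORY = {
    "10.20.0.4": {"owner": "dba-team", "criticality": "high", "zone": "crown-jewels"},
    "10.1.0.23": {"owner": "helpdesk", "criticality": "low", "zone": "user-lan"},
}

def enrich(alert: dict) -> dict:
    """Attach owner, criticality, and zone to a raw SIEM alert."""
    asset = ASSET_INVENTORY.get(
        alert["host"],
        {"owner": "unknown", "criticality": "unknown", "zone": "unknown"},
    )
    alert.update(asset)
    # Illustrative tuning rule: callbacks from high-criticality assets page an
    # analyst immediately; everything else queues for routine triage.
    alert["priority"] = "page" if asset["criticality"] == "high" else "queue"
    return alert

print(enrich({"host": "10.20.0.4", "rule": "botnet-callback"}))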
As essential as tools are to reducing the time it takes to detect a breach, even more critical is how well the people who run those tools put them to good use.
"Getting a list of convicted systems that are doing remote callbacks indicating compromise by a botnet is one thing; getting the boots on the ground to find the machine, capture it, analyze the breach, and reimage it can be a daunting experience for a large enterprise," says Ray Zadjmool, principal consultant for Tevora Business Solutions. "Where is it? Who owns it? Who do you call? All of this translates to slow detection and decreased response time."
One particular difficulty organizations face is streamlining collaboration among the various security and operations team members. Even with all of the right data residing within the organization in aggregate, it is very easy to fail to put the puzzle pieces together because of a lack of coordination.
"Right now, most organizations still have disparate teams, each using several disparate tools," Mical says. "They have to correlate all the critical data manually. It causes dangerous delays in validating suspected threats or responding to known threats."