Hacking The Human Side Of The Insider Threat

Published: 22/11/2024   Category: security




NSA-Snowden affair and the mechanics of tracking human behavior



The details of how a young systems administrator for an NSA contractor was able to access and walk away with highly classified program information from the super-secretive agency may never be fully revealed, but the Edward Snowden case has spurred debate over how best to catch a rogue insider before he does any damage.
There's no way to stop a determined insider from leaking or stealing what he knows if he can get his hands on it, but there are ways to track users as humans, rather than just by their use of company equipment or their network traffic, some experts say.
That would mean establishing a baseline for, say, Snowden's daily work activities. "It's not so much behavior on [technology] assets, but looking at ways to identify change in behaviors as a person starts to steal or goes through some espionage activity," says Chris Kauffman, managing partner at software firm Sphere of Influence. "If they have classified information and have been granted access to it, they can have it on their desktop ... But if their behavior starts to change in their patterns of life -- they change the websites they go to, they start emailing recipients more frequently, or the times of day they work change dramatically -- and their patterns diverge from previous ones or those of their co-workers, those can be red flags," he says.
The anomalies tell of behavioral intent, Kauffman says. In the past few years, companies working on behavioral analysis have been using more advanced analytics. "But we don't see that kind of focus on insider threat technologies," he says.
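As a rough illustration of the pattern-of-life idea Kauffman describes, the sketch below (with entirely hypothetical feature names and numbers, not any vendor's product) baselines a user's own daily activity -- sites visited, distinct email recipients, start-of-work hour -- and flags days that deviate sharply from that history:

```python
# Minimal sketch of per-user "patterns of life" baselining: daily feature
# vectors are compared against the user's own history, and features that
# deviate sharply are flagged. All names and numbers are hypothetical.
from statistics import mean, stdev

# Hypothetical daily features: distinct sites visited, distinct email
# recipients, and the hour the user's activity started.
baseline_days = [
    {"sites": 41, "recipients": 6, "start_hour": 9},
    {"sites": 38, "recipients": 5, "start_hour": 9},
    {"sites": 45, "recipients": 7, "start_hour": 10},
    {"sites": 40, "recipients": 6, "start_hour": 9},
]

def feature_zscores(day, history):
    """Standardize each of today's features against the user's own history."""
    scores = {}
    for feature, value in day.items():
        past = [d[feature] for d in history]
        mu, sigma = mean(past), stdev(past) or 1.0
        scores[feature] = (value - mu) / sigma
    return scores

# A day with many new email recipients and off-hours activity stands out,
# while ordinary variation in browsing does not.
today = {"sites": 44, "recipients": 31, "start_hour": 2}
flags = {f: round(z, 1) for f, z in feature_zscores(today, baseline_days).items() if abs(z) > 3}
print(flags)  # {'recipients': 30.6, 'start_hour': -14.5}
```

In practice the baseline would draw on far more history and richer features, but comparing a user against his own past -- rather than against a fixed rule -- is the core of the approach described here.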
[A determined user or contractor hell-bent on leaking data can't be stopped, but businesses should revisit their user access policies and protections. See NSA Leak Ushers In New Era Of The Insider Threat.]
Security information and event management (SIEM) and data leakage prevention (DLP) tools monitor assets, not people, he says. "They are more focused on behavioral analysis of the use of the asset ... they monitor use of data, patterns in bandwidth. If those behaviors spike beyond a baseline, then there's an alert. So it only does so much good if a person [deviates from a] pattern."
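For contrast, the asset-centric alerting described here can be as simple as the sketch below: one metric on one asset (hypothetical hourly outbound traffic), alerted on when it spikes past a rolling baseline, with no notion of who is behind the keyboard.

```python
# Rough sketch of asset-centric alerting: watch a single metric (hypothetical
# outbound megabytes per hour) and alert when it spikes beyond a rolling
# average. Window size and thresholds are illustrative only.
from collections import deque

WINDOW = 24          # hours of history kept for the baseline
SPIKE_FACTOR = 3.0   # alert when an hour exceeds 3x the recent average

history = deque(maxlen=WINDOW)

def outbound_spike(mb_this_hour):
    """Return True once enough history exists and this hour spikes past it."""
    spiked = len(history) == WINDOW and mb_this_hour > SPIKE_FACTOR * (sum(history) / WINDOW)
    history.append(mb_this_hour)
    return spiked

# Steady traffic builds the baseline; a sudden bulk transfer trips the alert,
# but a "low and slow" exfiltration that stays near the baseline never would.
for hour, mb in enumerate([120] * 24 + [2048]):
    if outbound_spike(mb):
        print(f"hour {hour}: outbound spike ({mb} MB) beyond baseline")
```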
Management also must take a more proactive role in identifying users who are going or have gone bad, since technology can't catch everything. "Management has responsibility for oversight of its workforce members. Unfortunately, a lot of them don't take an active role in monitoring user activities," says Andy Hubbard, senior security consultant at Neohapsis.
Bradley Manning's siphoning of gigabytes of classified information was a classic case study of a low-and-slow insider threat that slipped under the radar of traditional security monitoring systems, Kauffman says. Manning did his everyday job and occasionally grabbed a classified document, he says.
For the past two years, Kauffman's company has been conducting research and development on analyzing human behavior to detect and quell insider threat incidents. The company plans to ship a product from that R&D by the end of this year, he says.
The underlying concept is baselining and monitoring the user's normal workday behaviors, or patterns of life. If Snowden had begun visiting different websites than his co-workers, for example, human behavioral alarms could have been sounded, Kauffman says.
"Theoretically, you could have a system in place that monitored specific computer network usage patterns of Snowden and all of the people he worked with," he says. Such a system could reveal how his behaviors differed from those of others on his team; teams tend to behave similarly in their usage patterns, he says.
A software development team working on a product visits similar types of websites in its research, he says.
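A simple way to express that team-level similarity -- purely illustrative, with made-up website categories -- is to compare each user's distribution of visited site categories against the team's aggregate profile and see how closely they align:

```python
# Illustrative only: compare one user's mix of website categories against the
# team's aggregate profile; a user whose mix barely overlaps stands out.
from math import sqrt

def normalize(counts):
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def cosine_similarity(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    return dot / (sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values())))

# Hypothetical browsing categories for a software development team.
team_profile = normalize({"dev-docs": 300, "code-hosting": 220, "issue-tracker": 180, "search": 150})
typical_user = normalize({"dev-docs": 70, "code-hosting": 55, "issue-tracker": 40, "search": 35})
outlier_user = normalize({"dev-docs": 10, "anonymizer": 60, "file-sharing": 80, "search": 5})

print(round(cosine_similarity(typical_user, team_profile), 2))  # 1.0: blends in with the team
print(round(cosine_similarity(outlier_user, team_profile), 2))  # 0.08: diverges, worth a look
```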
This approach differs from the signature-based, previously defined scenarios of many existing security monitoring technologies, he says. The human behavior approach would rely more on the software learning the norms and spotting patterns and red flags -- including a user employing an anonymizer or other atypical technology.
There are challenges with the human behavior technology, including the massive amounts of data required for analysis and the age-old problem of false positives. "Our emails and website visits might change every day, so [the system] has to take that into account as part of the behavioral profile to avoid constantly sending out false positives," he says.
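One common way to handle that day-to-day drift (a general technique, not necessarily what the company's product does) is to let the profile itself adapt over time, for example with an exponentially weighted baseline that absorbs gradual change but still reacts to abrupt jumps:

```python
# Generic sketch of an adaptive behavioral profile: update the baseline with
# an exponentially weighted mean and variance so recent habits dominate, and
# alert only on large standardized deviations. Parameters are illustrative.
ALPHA = 0.1        # how quickly the profile adapts to new behavior
THRESHOLD = 4.0    # standardized deviation required to raise an alert

class AdaptiveProfile:
    def __init__(self, first_value):
        self.mean = float(first_value)
        self.var = 1.0

    def observe(self, value):
        deviation = (value - self.mean) / (self.var ** 0.5)
        # Fold every observation into the profile so ordinary drift is absorbed.
        self.mean = (1 - ALPHA) * self.mean + ALPHA * value
        self.var = (1 - ALPHA) * self.var + ALPHA * (value - self.mean) ** 2
        return abs(deviation) > THRESHOLD

# Hypothetical metric: distinct email recipients per day.
profile = AdaptiveProfile(first_value=12)
for day, recipients in enumerate([13, 11, 14, 12, 15, 13, 48]):
    if profile.observe(recipients):
        print(f"day {day}: {recipients} recipients deviates sharply from the profile")
```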
But like anything, the technology isn't foolproof. It has to be part of a layered defense. "It's not meant to replace legacy, rules-based engines: It really has to complement them," he says.
"There's no such thing as 100 percent watertight security -- we all have to recognize that we're trying to reduce the risk of our company being the next one having to show its red face in the newspapers," says security expert Graham Cluley.
