Does Your Security Data Mesh With Risk Metrics?

Published: 22/11/2024   Category: security




Normalizing the security data pouring from tools across the enterprise is a key step in creating a consistent set of metrics for managing risk.



With so much data streaming in real time from network logs, vulnerability managers, infrastructure monitoring tools, and security appliances across the enterprise, one of the most difficult first steps IT risk managers face in developing a security metrics program is often distilling that data into a consistent set of risk scores that makes sense in the boardroom.
"You've got all these different controls, they all talk about assets differently, they all present different information," says Dwayne Melancon, CTO of Tripwire. "So how do I roll that up into a small number of indicators that actually helps me develop confidence that I'm secure or my risk score is going down?"
It's not an easy question to answer, he says, but the answer starts with some kind of data normalization process. Data normalization helps organizations make apples-to-apples comparisons, or at the very least something close. Apples-to-oranges is a better evaluation model than apples-to-lettuce, after all.
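To make that "apples-to-apples" idea concrete, here is a minimal sketch of score normalization: each tool reports severity on its own scale, so every feed is mapped onto a common 0-100 scale before comparison. The tool names and scale maxima below are hypothetical examples, not a reference to any specific product.

```python
# Hypothetical sketch: map tool-specific severities onto a common 0-100 scale.
# Tool names and scale maxima are illustrative assumptions.

def normalize(tool: str, raw: float) -> float:
    """Map a tool-specific severity onto a common 0-100 scale, clamped."""
    scales = {
        "vuln_scanner": 10.0,   # e.g., a CVSS-style 0-10 score
        "ids": 5.0,             # e.g., an alert priority of 0-5
        "config_audit": 1.0,    # e.g., a failure ratio of 0-1
    }
    top = scales.get(tool)
    if top is None:
        raise ValueError(f"unknown tool: {tool}")
    return max(0.0, min(100.0, raw / top * 100.0))

print(normalize("vuln_scanner", 7.5))  # 75.0
print(normalize("ids", 4))             # 80.0
```

Once every feed speaks the same 0-100 language, scores from different controls can be compared and aggregated directly.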
[Are you governing without good metrics? See Governance Without Metrics Is Just Dogma.]
A project to normalize security metrics should focus on building a key set of security risks that can be evaluated through quantifiable, consistent, and measurable metrics over time, says Steve Schlarman, eGRC solutions manager at RSA, explaining that these metrics shouldn't be overly complex for the metric owners. "If the data takes too long to compile, report, or evaluate, then the metric owner will not be able to report consistently over time."
To normalize security data and tie it to metrics, Melancon says to start with the business first. Doing so establishes a shorter and more relevant list of data feeds that need to be normalized.
"I think one of the tendencies a lot of security people have is they start with the controls, and they end up with a lot more controls than they otherwise may need," he says.
So for a public company, get a sense of how the company makes money by reading annual reports and thinking critically about the biggest risks that threaten key revenue streams.
"Then back up and say, OK, what controls do we have that help us monitor and get better confidence around those things?" he says, explaining that the data from those tools will be the data around which organizations should start building security performance indicators and risk scores. As they develop those, Melancon warns security professionals to remember that just as they prioritize security spending based on risk, they also need to prioritize how they examine and normalize data based on how important certain assets are to the business.
"Where this falls apart is a lot of organizations try to apply the same level of rigor across everything, and you just choke everybody," Melancon says. "Either you're too bureaucratic or too slow, or you're always frustrated. So if you start with what our top critical services are and the assets associated with those, then you can at least adjust the shape of your spending to match the shape of your risk."
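One way to sketch that risk-proportional prioritization is to weight each finding by the business criticality of the asset it sits on, so crown-jewel systems dominate the picture. The criticality tiers and weights below are illustrative assumptions, not a published scheme.

```python
# Hedged sketch: weight findings by asset criticality so the score's
# "shape" follows business risk. Tiers and weights are assumptions.

CRITICALITY_WEIGHT = {"crown_jewel": 1.0, "important": 0.5, "commodity": 0.1}

def weighted_risk(findings):
    """findings: list of (asset_tier, severity) tuples, severity on 0-100.

    Returns a criticality-weighted average severity.
    """
    total = sum(CRITICALITY_WEIGHT[tier] * sev for tier, sev in findings)
    weight = sum(CRITICALITY_WEIGHT[tier] for tier, _ in findings)
    return total / weight if weight else 0.0

# A severe finding on a crown jewel outweighs one on a commodity box:
score = weighted_risk([("crown_jewel", 90), ("commodity", 10)])
```

The same severity on a commodity asset barely moves the number, which is exactly the "don't apply the same rigor to everything" point.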
By evaluating business assets first to determine which data should be normalized and included in the metrics program, organizations can streamline and simplify how many controls their measurements depend on. Not only does that help with consistency, but also with responsiveness.
"A realistic goal of data normalization is to be able to analyze useful data in real time, especially if the purpose is risk assessment and management," says Rick Aguirre, president of Cirries Technologies. "You want to know about threats as they happen, not three days later in some data pool somewhere. By far, most of the data generated by networks and devices is not useful."
As an organization establishes normalization processes for better metrics, it is crucial to clearly define seven core attributes for each metric, Schlarman says: the metric description, the measurement process or formula, metric ownership, metric scope, the source of the metric, measurement frequency, and trend expectation. From there, the risk management team should offer a forum where metric owners report on a consistent basis and do root-cause analysis.
"The main goal is to set up a sustainable program, not a one-time effort," he says. Then, over time, metrics can be activated and retired as necessary within the program.
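Schlarman's seven core attributes can be sketched as a simple record type, so every metric is fully defined before anyone reports on it. The field names paraphrase the article's list; the example values are hypothetical.

```python
# Sketch of Schlarman's seven metric attributes as a record type.
# Example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class SecurityMetric:
    description: str        # metric description
    formula: str            # measurement process or formula
    owner: str              # metric ownership
    scope: str              # metric scope
    source: str             # source of the metric
    frequency: str          # measurement frequency
    trend_expectation: str  # e.g., "decreasing"

patch_latency = SecurityMetric(
    description="Mean days to patch critical vulnerabilities",
    formula="sum(patch_date - disclosure_date) / count(criticals)",
    owner="Vulnerability Management Lead",
    scope="Internet-facing production servers",
    source="vulnerability scanner export",
    frequency="monthly",
    trend_expectation="decreasing",
)
```

Forcing every metric through the same template is what makes activating and retiring metrics over time tractable.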
At the moment, the industry is still a little bit of the Wild, Wild West in the way that most organizations apply security or risk ratings to their asset data, Melancon says. Some organizations apply confidentiality, integrity, or availability ratings to their assets and use that as a basis. Others might use some of the NIST frameworks to do so. One framework that he sees as having some good potential is the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) Framework, jointly developed by NIST and the DHS, which provides a good foundation for risk scoring.
"The concept is you take a whole bunch of different controls, like antivirus, IDS, IPS, file integrity monitoring, database activity monitoring, and all of these different scores, and roll them up into one composite indicator, and then you use that to track whether your risk is going up or down overall," Melancon says. "The idea is great, but the execution is really hard."
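In its simplest form, the rollup Melancon describes is a weighted average of per-control scores that have already been normalized to a common scale. This is a hedged sketch of that idea, not the CAESARS scoring model itself; the control names and weights are assumptions.

```python
# Hypothetical composite-indicator rollup: weighted average of per-control
# risk scores, each already normalized to 0-100. Weights are assumptions.

CONTROL_WEIGHTS = {
    "antivirus": 0.15,
    "ids": 0.25,
    "file_integrity": 0.30,
    "db_activity": 0.30,
}

def composite_risk(scores: dict) -> float:
    """Roll normalized (0-100) control scores into one composite indicator."""
    return sum(CONTROL_WEIGHTS[c] * s for c, s in scores.items())

this_week = composite_risk(
    {"antivirus": 20, "ids": 40, "file_integrity": 60, "db_activity": 30}
)
# Tracked week over week, the single number shows whether overall risk
# is trending up or down.
```

The hard part in practice is not the arithmetic but agreeing on the weights and keeping every input feed normalized and current.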
He believes that to become less unwieldy, the industry needs to come up with a lighter-weight version of something like CAESARS, so that an organization with a limited budget or limited man-hours can still pinpoint five to 10 metrics to focus on.
From there, it will be much easier to offer line-of-business executives a consistent set of key performance indicators that they can easily understand. This is a critical point, says John Johnson, global security program manager for John Deere, who explains that executives don't like security heat maps or fancy threat graphics that get down in the weeds of security operations.
"Executives want to see the most boring stuff in the world. They just want to see a dot that follows a straight line," Johnson says. "They don't want a slope or a peak -- they don't want to know there was some virus out there last week. They just want to know, are you hitting these key performance indicators you are tracking, and is what you're doing making sense?"
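That "dot on a straight line" can be sketched as a simple status check: a KPI is reported as on track as long as every reading stays within an agreed tolerance of its target. The readings, target, and tolerance below are illustrative.

```python
# Illustrative sketch of a boring-by-design KPI check: flag the metric
# only when a reading drifts outside tolerance of its agreed target.

def kpi_status(readings, target, tolerance):
    """Return 'on track' if every reading is within tolerance of target."""
    if all(abs(r - target) <= tolerance for r in readings):
        return "on track"
    return "off track"

print(kpi_status([94, 95, 96, 95], target=95, tolerance=2))  # on track
print(kpi_status([94, 95, 88, 95], target=95, tolerance=2))  # off track
```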