How Bad Data Alters Machine Learning Results

Published: 22/11/2024   Category: security




Machine learning models tested on single sources of data can prove inaccurate when presented with new sources of information.



The effectiveness of machine learning models may vary between the test phase and their use in the wild on actual consumer data.
Many research papers claim high malware detection rates and low false-positive rates for machine learning, and often deep learning, models. However, nearly all of these rates are measured within the context of a single source of data, which the authors use both to train and to test their models.
Machine learning has become more advanced but isn't used enough yet in security, says Hillary Sanders, a data scientist in Sophos' data science research group. She anticipates usage will increase in coming years to address the rise of different forms of malware.
Historically, Sanders explains, static signatures have been used to detect malware. This method doesn't scale well because software needs to be updated with new signatures as more malware is created. Machine learning and deep learning automatically generate more flexible patterns, which could better detect malicious content compared with stricter static signatures.
"This enables us to move away from signature detection and more toward deep learning detection, which doesn't really require signatures and is going to be better at detecting malware that has never been seen before," she says.
The challenge is in creating a deep learning model to detect forms of malware that don't yet exist. Sanders explains the problem of using current data to test these models, which would ideally be used to detect future malware strains across different clients and environments.
"We can't be sure the data we trained on is going to be super similar to the data in an organization's deployment," she explains. "If we're training on data that isn't like the data we want to eventually test on, our model might fail catastrophically."
In current machine learning research, accuracy estimates don't consider how systems will process future data. Sanders says modern publications lack time-decay analysis and sensitivity analysis, which could lead to a lack of trust among those who rely on this information.
"If researchers forget to focus on sensitivity testing and time decay, our models are liable to fail catastrophically in the wild," she explains.
Time-decay analysis simulates how a model's accuracy decreases over time, she explains. Consider a dataset with information from January through April. If a machine learning model is trained only on data from before February 1, it will do well on data from January, but its accuracy will begin to decay from February onward.
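The idea can be illustrated with a deliberately simple sketch. The data, the cutoff, and the naive token-blocklist "model" below are all invented for illustration; real time-decay analysis would use an actual trained classifier and real time-stamped samples, but the evaluation pattern (train before a cutoff, then measure accuracy month by month) is the same.

```python
# Toy time-decay evaluation: train a naive token-blocklist "model" on
# samples seen before a cutoff, then measure its detection rate for each
# later month. All data here is synthetic and purely illustrative.

# Each sample is (month, token, is_malicious). Each month, some malware
# reuses old tokens and some uses tokens the training window never saw.
samples = [
    (1, "evil-a", True), (1, "evil-b", True), (1, "good-x", False),
    (2, "evil-a", True), (2, "evil-c", True), (2, "good-x", False),
    (3, "evil-c", True), (3, "evil-d", True), (3, "good-y", False),
    (4, "evil-d", True), (4, "evil-e", True), (4, "good-y", False),
]

cutoff = 2  # train only on months strictly before February (month 2)
train = [s for s in samples if s[0] < cutoff]
blocklist = {tok for _, tok, is_mal in train if is_mal}

def detection_rate(month):
    """Fraction of that month's malware the trained blocklist catches."""
    malware = [tok for m, tok, is_mal in samples if m == month and is_mal]
    hits = sum(tok in blocklist for tok in malware)
    return hits / len(malware)

rates = {m: detection_rate(m) for m in (1, 2, 3, 4)}
# Detection is perfect on the training month and decays in later months
# as new malware tokens appear that the model never saw in training.
```

Plotting `rates` against the month gives exactly the decay curve Sanders describes: a model evaluated only on its own training window looks far better than it will in deployment.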
Sensitivity analysis tweaks inputs to machine learning models to see how the output is affected. Sanders will present sensitivity results in her presentation, titled "Garbage In, Garbage Out: How Purportedly Great Machine Learning Models Can Be Screwed Up by Bad Data," at this year's Black Hat USA conference in Las Vegas.
This analysis will include a deep learning model designed to detect malicious URLs, which was trained and tested using three sources of URL data. As part of her discussion, she'll dive into what caused the results by focusing on how the data sources differ, and on higher-level feature activations the neural net identified in some datasets but not in others.
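A minimal cross-source sketch shows the shape of this experiment. The "model" here is just a token blocklist, and the three sources and their URLs are invented; Sanders' actual work uses a deep learning model and real URL feeds. The point is the evaluation structure: train on one source, then score every source and compare.

```python
# Toy cross-source sensitivity check: train the same naive model on one
# data source, then measure accuracy on every source to see how much
# performance depends on where the data came from. All data is invented.

def train_scorer(training_urls):
    # "Model": flag a URL as malicious if it shares any path token with
    # a known-bad training URL.
    bad_tokens = set()
    for url, is_mal in training_urls:
        if is_mal:
            bad_tokens.update(url.split("/"))
    return lambda url: any(tok in bad_tokens for tok in url.split("/"))

# Three hypothetical URL sources; each entry is (url, is_malicious).
sources = {
    "vendor_feed": [("site.com/login-update", True),
                    ("shop.com/cart", False)],
    "honeypot":    [("site.com/login-update", True),
                    ("evil.biz/login-update", True),
                    ("cdn.net/asset", False)],
    "customer":    [("bank.io/verify-account", True),
                    ("news.org/story", False)],
}

scorer = train_scorer(sources["vendor_feed"])

def accuracy(url_set):
    return sum(scorer(u) == is_mal for u, is_mal in url_set) / len(url_set)

results = {name: accuracy(urls) for name, urls in sources.items()}
# Accuracy is perfect on the training source and on the similar honeypot
# feed, but drops on customer data whose malicious URLs look different.
```

The gap between `results["vendor_feed"]` and `results["customer"]` is the kind of signal sensitivity analysis surfaces: a model that looks excellent on its own source may degrade sharply on data drawn from a different distribution.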
For security teams, the end goal with deep learning is to stop malware. If training and testing data are biased compared with real-world data, models are likely to miss real threats.
"You ignore the thing you could be optimizing for," says Sanders. "You could miss swaths of malware."

Black Hat USA returns to the fabulous Mandalay Bay in Las Vegas, Nevada, July 22-27, 2017. Click for information on the conference schedule and to register.
Related Content:
Deep Learning's Growing Impact on Security
The Rising Tide of Crimeware-as-a-Service
How End-User Devices Get Hacked: 8 Easy Ways
The Detection Trap: Improving Cybersecurity by Learning from the Secret Service
