Taming Bad Inputs Means Taking Aim At Weird Machines

Published: 22/11/2024 | Category: security




Overly accommodating platforms and protocols let attackers use inputs as code, essentially allowing attackers to program an unintended virtual machine



In 2002, Microsoft CEO Steve Ballmer argued that vulnerabilities are a fact of life: "Let's acknowledge a sad truth about software: any code of significant scope and power will have bugs in it," he stated in a much-publicized memo at the time.
While a decade of vulnerability research and software development have only reinforced the claim, security and academic researchers have continued to look for ways to eradicate bugs -- or at least minimize their impact -- on systems. One of the most practical approaches, known as LangSec, evolved from studying how hackers exploit software by using a system's unintended reactions to crafted inputs, essentially, as a programming language.
"Because of this computational power, the code that handles complex data is really indistinguishable from a virtual machine to which the data serves as byte code, making any input a program," says Sergey Bratus, research assistant professor at Dartmouth College and one of the researchers leading the LangSec project.
LangSec, created by technologist Meredith Patterson and cryptographer Len Sassaman, aims to eliminate the scattershot techniques for sanitizing inputs and instead create the simplest set of allowable inputs -- a language -- that minimizes the risk of unintended consequences. In a presentation at the Shmoocon conference in February, Patterson and Bratus recommended that software developers direct data through a program component, or recognizer, designed to check the validity of the input against a minimal language.
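The recognizer idea can be illustrated with a minimal sketch. The message format, names, and grammar below are illustrative, not from the LangSec project: the allowed input language is deliberately tiny -- one "key=value" pair, lowercase keys, numeric values -- and anything outside it is rejected outright rather than corrected.

```python
import re

# The entire allowed input language, stated up front as a regular
# expression: a short lowercase key, "=", and a short run of digits.
MESSAGE = re.compile(r"\A[a-z]{1,16}=[0-9]{1,8}\Z")

def recognize(message: str) -> tuple[str, int]:
    """Accept only well-formed messages; reject everything else."""
    if not MESSAGE.match(message):
        raise ValueError(f"input rejected by recognizer: {message!r}")
    key, value = message.split("=", 1)
    return key, int(value)

# Downstream handlers only ever see inputs the recognizer accepted.
print(recognize("retries=3"))
try:
    recognize("retries=3; rm -rf /")   # outside the language: rejected
except ValueError as exc:
    print(exc)
```

The point is that validation happens against an explicit, minimal grammar before any processing, rather than being scattered through the handling code as ad-hoc sanitization.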
The current trend of accepting as large a variety of inputs as possible -- and going so far as to attempt to correct potential input errors -- contributes to insecurity, Bratus says. Allowing a complex language turns a software program into a "weird machine" that does not act as expected and can be controlled by an attacker, he says.
"We are doing input processing wrong," Bratus says. "The part of the program that handles inputs should be as limiting as possible. The theory provides good guidance on what that means."
While user inputs are a major cause of software vulnerability, many other classes of security weaknesses exist. The LangSec approach provides one tool for programmers to create more secure code, says Fred Schneider, Samuel B. Eckert Professor of Computer Science at Cornell University.
"What they are doing is building an input filter for their programs -- a kind of firewall," Schneider says. "It will defend against some attacks, but not all."
[Taking a page from the metrics used to rank tornadoes and software vulnerabilities, attack-mitigation firms look to find a better measure of denial-of-service attacks than bandwidth and duration. See New Metric Would Score The Impact, Threat Of DDoS To An Enterprise.]
For more than a decade, Schneider and colleagues from Carnegie Mellon University and Harvard University have focused their efforts on creating a more general foundation for building security into the programming languages used by developers. By creating a security policy and using specific techniques to enforce the policy on programs, the approach -- known as Language-Based Security -- can create programming languages that force a user to write secure code.
"It is possible to design programming languages in which the programmer is coerced into telling the compiler -- that is, by writing some details into the program -- enough information so that the compiler is almost certain to prevent the compilation of a program that has certain classes of bugs," he says.
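A rough flavor of that idea can be sketched even in Python's optional type system (the compiler-enforced guarantees Schneider describes are stronger). Here `SafeHtml` is a hypothetical type that distinguishes escaped text from raw user input; a static checker such as mypy will flag any program that passes raw text where `SafeHtml` is required, so the programmer is forced to route data through the sanctioned `escape` function.

```python
from typing import NewType

# SafeHtml is a distinct type at check time, a plain str at run time.
SafeHtml = NewType("SafeHtml", str)

def escape(raw: str) -> SafeHtml:
    """The only sanctioned way to produce SafeHtml."""
    return SafeHtml(raw.replace("&", "&amp;")
                       .replace("<", "&lt;")
                       .replace(">", "&gt;"))

def render(body: SafeHtml) -> str:
    """Only accepts text that has been through escape()."""
    return f"<p>{body}</p>"

user_input = "<script>alert(1)</script>"
print(render(escape(user_input)))
# render(user_input) -- passing the raw string directly -- would be
# rejected by a static type checker before the program ever runs.
```

The declared types are the "details written into the program" that give the checker enough information to rule out a whole class of injection bugs.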
Schneider and his colleagues are currently working on exporting the language-based security techniques to the real world.
Because the LangSec project came from work done in the security community and its goals are more modest, however, it could have an easier time being adopted. Already, some security-software developers have started creating recognizers to secure their own projects, Bratus says.
"It's about building armored recognizers, and building the fortifications, and building it properly according to science and engineering," he says. "At some point, you will stop fearing your inputs, and much of this uncertainty and doubt will go away."
Have a comment on this story? Please click "Add Your Comment" below. If you'd like to contact Dark Reading's editors directly, send us a message.


