Cloudbleed Lessons: What If There's No Lesson?

Published: 22/11/2024   Category: security




"There's nothing to be done" isn't an encouraging lesson from a security disaster, but that may be the biggest takeaway from Cloudbleed.



Whenever a serious computer or network security issue becomes public, one of the first questions IT professionals ask is, "What lessons can we learn?" It's a polite way of phrasing the real question: How do I keep my company out of the news for something like this? But what's a professional to do when the best answer to the question might well be that absolutely nothing any reasonable company might have done would stop the problem? That's a very different question.
2017 has seen a spate of news generated by errant keystrokes, from the Cloudbleed vulnerability that exposed millions of pieces of personally identifiable information to the AWS outage that brought large portions of the Internet to its knees. Finding a single keystroke gone awry makes the classic needle-in-a-haystack analogy insufficient. Finding a needle in a thousand-acre field of haystacks might be more like it -- and that's something that may simply go beyond the reasonable.
Bill Curtis is someone who seems well suited to answer questions involving code and software quality -- especially software quality. A founder of the Consortium for IT Software Quality (CISQ), Curtis was the leader of the project that created the Capability Maturity Model (CMM) for both software and people. A long-time university professor, Curtis is now senior vice president and chief scientist at CAST and remains a member of the CISQ board of directors.
In a telephone interview with Light Reading, Curtis was reluctant to criticize the software developers at Cloudflare for the incident that became known as Cloudbleed. "There are things that are humanly possible in terms of testing and detection, and then there are things that are just so far out there. They can happen, and it's a tragedy when they do, but it's hard to say that they were negligent in their work, because it really would have taken some bizarre thinking into the conditions that could occur," he said.
Curtis said that part of the problem of finding the vulnerability is that it did not, in all likelihood, involve a programming mistake. Instead, it was the result of using a parser built on Ragel (not developed in-house by Cloudflare) in a very particular, very specific set of circumstances. Within those circumstances, a buffer overflow could occur, and personal information could be released.
The buffer overflow was, according to Curtis, part of what made early detection of a problem so difficult. "Here's the thing about buffer overflows: we don't really do a lot of analysis on buffer overflows, and the reason is that there are a zillion false positives -- it just creates havoc," Curtis said. "Some of our competitors go after buffer overflows and they get flooded with false positives."
"For most of these buffer overflows it's really the context that makes that code cause an overflow. And you've got to understand the context, which is not easy. That's a whole nother level of analysis, and if you read the piece that Cloudflare wrote, they listed all the conditions that had to occur," Curtis explained.
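Cloudflare's own post-mortem traced the leak to a Ragel-generated parser whose end-of-buffer guard was a strict equality check: a malformed, unterminated HTML tag let the scanning pointer step past the end marker in a single jump, so the equality test never fired and the parser kept reading into adjacent memory. The C sketch below is a hypothetical, simplified illustration of that failure class -- the names (`overrun`, `buf`, `p`, `pe`) are illustrative, not Cloudflare's actual code -- showing a scanner that sometimes advances by two bytes while guarded only by `p != pe`.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of an equality-guarded scanner.
 * Returns how many bytes the scan pointer overshot the end of the
 * buffer (0 means the equality guard happened to hold). */
static ptrdiff_t overrun(const char *buf, size_t len)
{
    const char *p  = buf;        /* current position */
    const char *pe = buf + len;  /* one past the end */

    while (p != pe) {            /* safe ONLY if p always moves by 1 */
        if (*p == '\\')
            p += 2;              /* consume an escape pair: may step OVER pe */
        else
            p += 1;

        if (p > pe)              /* detect the overshoot for this demo;   */
            return p - pe;       /* real code would read past the buffer  */
    }
    return 0;
}
```

The robust guard is `p < pe` (or an explicit bounds check before any multi-byte consume), which holds no matter how far one step advances the pointer. For example, `overrun("a\\", 2)` overshoots by one byte because the trailing backslash makes the pointer jump from index 1 straight to index 3, skipping the `p == pe` position entirely.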
"That's a nightmare to go find through static analysis, or even if you're a smart guy," Curtis said, pointing out that there is no reasonable testing regimen that can be expected to find all the issues in complex, modern software systems. "That's the problem we have in software: the incredible complexity that we've gotten into now and the difficulty of detecting these [issues]," Curtis said.
He pointed to a software quality regimen that found an extraordinary number of issues but went beyond the effort most organizations can afford: the detection and testing regimen for the avionics systems on the Space Shuttle. "These guys were at a point where the defects they were detecting were all over ten years old in the code. They weren't generating new defects," Curtis said. "Their analysis, detection and testing were so thorough that two-thirds of all their effort was in testing."
The professionals in the software development group on Space Shuttle avionics spent much of their time coming up with bizarre scenarios involving anomalies that no one had ever seen, but that were not impossible according to the laws of physics. Commercial developers would have to go through the same sort of imagination exercise to find interactions like the one that led to Cloudbleed. "You'd really have to be thinking, 'What really isn't probably going to happen but possibly could?' If all these different conditions occurred, you'd say that there were all these bizarre little things that had to happen in order for a buffer overflow to occur," Curtis explained.
Curtis thinks that the best prospect for avoiding Cloudbleed-like problems in the future may lie with the computers themselves. "For these things that are context-dependent and very tricky, I'm hoping that we can apply machine learning techniques -- that maybe the machine learning can go out and begin to understand some of these bizarre contexts and find some of the things that might have been innocent but go on to create some serious problems," he said.
Until machine learning becomes the norm, Curtis believes that rapid response to revealed issues is the practical model for the future, especially since there's no blame to be placed on the development program at Cloudflare. "If this could have occurred frequently then, yeah, they screwed up. But if they couldn't have anticipated the complex set of circumstances required for it to occur, then they weren't negligent," he said. "You know, we get a lot of this in complex systems, where people just couldn't have imagined all the interactions that led to the problem. And that's something we're going to live with more and more as these systems get more complex and we have different pieces coming from different vendors."
— Curtis Franklin, Security Editor, Light Reading





