Security in Knowing: An Interview With Nathaniel Gleicher, Part 1
Nathaniel Gleicher, former Director of Cybersecurity Policy for the Obama White House and ex-senior counsel for the US Dept. of Justice’s computer crimes division, knows something about security.
You know you've been hammered by bad news when you start looking for a silver lining in a 6 million name identity heist, but here I am. And I want to be clear: for Verizon and (especially) its customers, there really isn't a silver lining. Personal data theft is awful under any circumstances. That said, I still had to cover the breach, and that meant I got to talk to some interesting people.
One of those interesting people happens to have been Nathaniel Gleicher, former Director of Cybersecurity Policy for the Obama White House and ex-senior counsel for the US Dept. of Justice’s computer crimes division. Gleicher is now head of cybersecurity strategy for Illumio. He and I got together on a phone call and started talking about the Verizon brief but quickly expanded into a conversation about the wider issues of security and privacy.
The conversation was so long and so filled with information on how to protect an environment that we're presenting it in two pieces. One of the key ideas introduced in Part One is that the Secret Service's protective detail provides a solid model for defending your network and computers. Tomorrow, in Part Two, we'll talk about what that means and how protection doesn't have to mean unnecessary limitations for your users.
What follows is an edited version of our conversation.
Curt Franklin
: I want to start by asking a huge question: Given that this is the latest in a long string of enormous breaches, is there anyone left on the Internet whose data is intact and secure?
Nathaniel Gleicher
: I think we used to say that there are two types of companies out there: companies that will admit they've been breached and companies that don't know it yet. I'm inclined to say the same thing about individuals: there are individuals who know they've had some of their data stolen, and there are those who don't realize it yet. The funny thing about this breach is that it was 6 million users. And the honest truth is, as large as that is, the previous breach that happened eight months ago was eight million voters. The numbers are so large now that it's almost hard to keep track.
CF
: When 6 million is getting into the category of run-of-the-mill breach, it's possible we've reached some sort of massive tipping point.
NG
: The common thread is that all of them [involve] information exposed through misconfiguration and user error. It's a reminder that no matter how disciplined your security team is, if your organization has to solve security problems manually, they're going to make a mistake. And if your security is only one layer deep, that mistake is eventually going to expose a whole bunch of data.
CF
: Is this validation of the people who have doubted the security of cloud services? Is there something inherent in the configuration of cloud services that makes them either more difficult to configure in a secure way or simply more susceptible to misconfiguration?
NG
: The interesting thing about this particular example is that we're talking about S3 [Amazon Web Services' storage service] specifically. By default, S3 buckets are configured not to be publicly addressable or publicly accessible. Someone at the organization went in at night and actually changed the permissions for this S3 bucket to make it publicly addressable. So I think the story here is a little broader than S3 in some sense.
The story is that any manual security control is going to be at risk. These are both examples of cloud breaches, but I've seen statistics that suggest that something upwards of 97% or even 99% of all breaches involve a misconfigured firewall at some point. I don't think the lesson is about cloud in particular. Cloud can certainly be secure. The protocols and tools that you use may be different from those in the data center, but it can certainly be secure.
The problem is that our environments are so complex, so hybrid, and so dynamic that expecting humans to keep up and handle this problem on their own is just impractical. What you really need are tools that make it so you don't need to solve these problems at a retail level and don't need to make specific one-off decisions about security protocols.
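Gleicher's point about replacing one-off manual decisions with automated checks can be made concrete. Below is a minimal, hypothetical sketch (not Illumio's or AWS's actual tooling; the bucket names and the `acl` field are invented for illustration) of auditing a storage inventory for the kind of public-exposure misconfiguration he describes:

```python
# Hypothetical sketch: scan a bucket inventory for public-exposure misconfigurations,
# rather than relying on someone to eyeball permissions bucket by bucket.

def find_public_buckets(buckets):
    """Return the names of buckets whose ACL grants public read access."""
    return [b["name"] for b in buckets if b.get("acl") == "public-read"]

# Invented example inventory; in practice this would come from an API call.
inventory = [
    {"name": "internal-logs", "acl": "private"},
    {"name": "customer-exports", "acl": "public-read"},  # the one-off change that exposes data
]

print(find_public_buckets(inventory))  # → ['customer-exports']
```

Run on a schedule, a check like this flags the manual mistake before it becomes a breach, which is exactly the shift from retail-level decisions to systematic controls described above.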
CF
: Well, it sounds strangely as though you're leading to one of those conclusions we hear at Gartner and other conferences, where the takeaway line is that machine learning and system automation are the tools that are going to help save us from ourselves. Is that in fact where you're going, or is there another lesson you want to make sure we learn?
NG
: I think automation and orchestration are a very important component. One of the really interesting things about AI and machine learning, and whatever other flavor you want to use to describe intelligent systems, is that there's a lot of buzz around them; they've become very buzzwordy. The thing that I think is interesting, though, is that there's a heavy focus on using intelligent systems for aberration detection, pattern matching, solutions like that. And what's interesting about it to me is that humans are actually pretty good at aberration detection.
We were built as aberration-detecting machines. If you're living out in the wilderness and you see something different from anything you've seen before, that could be a risk. So we're really good at seeing those things. What we're not terribly good at is dealing with these incredibly complex and dynamic environments. We've created these incredibly hybrid and distributed networks and computer systems. AI is a really important tool, or here I'll say not general-purpose [AI] but specific AI and machine learning and intelligent systems. But what would be really powerful, and what I would like to see more of, is for us to turn AI on the problem that we don't solve terribly well.
Turn AI on simplifying the environment and the choices we have to make. Turn AI on making sense of these complex environments, and let humans do what humans do really well, which is aberration detection, if you give them an environment they can understand and work in.
CF
: You've talked about letting the machines do what they do well so that humans can do what we do well, and I certainly see that. But at a deeper level, is the ultimate solution to design simpler systems? Is the answer to return to a certain Zen-like simplicity in the systems we design, so they get back to some sort of human scale and we're all tapping out commands on a simple character-based computer?
NG
: I suppose from a security perspective it might be nice if we could end up there. Practically, no, I don't think that's the answer, and realistically, I don't think it would ever happen anyway, in part because of the reason why we're building more complex systems: we're doing it for very specific business reasons.
These systems are powerful and they let us do things we could never do before, and the drive to leverage that complexity isn't going to go away. One of my convictions is that a lot of the problems we face in cybersecurity actually have answers that were worked out in the context of physical security. We tend to tell ourselves that the network is so dynamic and so completely different that there's nothing we can learn. But there's actually quite a bit.
If you look at some of these lessons, it's striking the way the challenges we're facing come into relief. So I'll give you an example: think about the way virtually any effective physical security team operates. I like to use the Secret Service, because I think the way they protect the president is a surprisingly good model for the way we protect our data centers, because their goal is risk management.
The president is always exposed, in the same way that the valuable tools we're defending are always exposed, and you can see it in the way they work and the way they invest in security. Draw a pyramid and break it into four horizontal slices: put "understand" at the bottom, the biggest slice; put "control" above it; put "detect" above that; and then, in a tiny slice at the very top, put "respond." This is how physical security teams invest in security, and it's drawn like this very intentionally, because the bottom of the pyramid is where they invest the most.
If you think about protecting the president, we tend to think about agents standing in front of him with guns. But actually, most of the president's security starts months before he ever shows up. If the president's going to speak somewhere, the Secret Service shows up six months in advance and invests heavily in understanding that environment and in exerting control, transforming the environment to make it advantageous for them. Then they put detection in place, and then they respond if they detect a change in the environment. But they invest first in understanding, because you can't control an environment you don't understand, you can't detect change in an environment you don't control, and you can't respond effectively if you don't have all those pieces.
Related posts:
Cybersecurity: More a People Than a Tech Challenge?
The Stress of Being CISO
Deciphering the Threat Landscape
— Curtis Franklin is the editor of SecurityNow.com. Follow him on Twitter @kg4gwa.