There seems to be an ongoing, almost incessant debate about responsible disclosure: whether it is helpful for white hat hackers and other security researchers to publicly reveal details of the security vulnerabilities they find.
The debate has been getting a lot of mainstream media exposure because of phenomena such as Heartbleed and Shellshock, and the question I increasingly field from said mainstream media is: “What do we do in the face of this growing mountain of disclosures?” My answer is always, “Smile.” Because these disclosures are good things. Responsible disclosure is an incredibly valuable tool that helps ensure infrastructure remains reliable.
Opposition to the idea is usually grounded in the argument that telling the black hats which doors are unlocked is unnecessarily risky.
As if they don’t already know.
The fact is, whether motivated by avarice, ideology or nationalistic pride, the bad guys are already doing a pretty good job of probing the defenses of every network on the planet and sharing what they find among themselves. For the rest of us to keep up (or, rather, not fall so far behind), we have little choice but to share information too. Industry’s response in the wake of Heartbleed and Shellshock was instructive. Once the flaws were brought to light, the speed with which most organizations moved to patch their holes, holes that were no secret to the Internet’s boogeymen, was (or should have been) reassuring.
During a recent Christian Science Monitor panel discussion on cybersecurity entitled Developing America’s Edge, Jeff Moss, founder of Black Hat and DEF CON and a co-chair of the Department of Homeland Security Cybersecurity Task Force, commented that (my paraphrase) “if we haven’t solved information security after twenty years, what makes us think we can as the problems grow more complex?”
Actually, we’ve been grappling with the problem of information security for a lot longer than two decades. Here’s a telling excerpt from the book Locks and Safes: The Construction of Locks by A. C. Hobbs, published in 1853.
“A commercial, and in some respects a social doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery.
Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock, let it have been made in whatever country, or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance.”
(Where Hobbs writes “lock,” substitute “application,” and this treatise is as relevant in 2014 as it was more than a century and a half ago.)
Today, as in the 1850s, software vendors’ attitudes toward disclosure are evolving from treating it as criminal to treating it as helpful, but we haven’t yet reached the point where anyone can comfortably submit a vulnerability to any vendor. Some, like Google and Microsoft, are doing a good job of setting the pace, and companies like Bugcrowd are making it easier to manage the process, so we’re moving in the right direction, but the road is dark and full of terrors.
Where things get really interesting is when you start to think at cloud scale. For example, if we solve the patching problem by delivering software-as-a-service, what happens when someone finds a vulnerability? Ostensibly, if the vendor doesn’t fix it within the allotted responsible disclosure timeframe and the vulnerability is published, then the person who discovered it is actively harming every single one of that vendor’s customers. Further, is it acceptable that all vulnerabilities, whatever their severity, be fixed within the same 45 days?
Then the question becomes who is to blame: the person who found the vulnerability or the vendor that failed to patch it?
The next evolution of responsible disclosure will need a governing body that sets appropriate deadlines (and maybe payouts) for vulnerability disclosure, interfacing between researcher and vendor on metrics like severity, likelihood of exploitation, and impact to the vendor’s business (e.g., patching this vulnerability means we can’t push out a major iteration, which will hurt the bottom line). But before we get to that point, we should err on the side of openness and continue further down the path.
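To make that concrete, here is a minimal sketch of how such a body might turn those metrics into a per-vulnerability deadline. The weighting formula, the field names, and the 45-day baseline are all hypothetical illustrations, not a proposed standard.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class VulnerabilityReport:
    # All fields and weights below are hypothetical, for illustration only.
    severity: float            # 0.0 (trivial) to 1.0 (critical)
    exploit_likelihood: float  # 0.0 (theoretical) to 1.0 (exploited in the wild)
    business_impact: float     # 0.0 (easy patch) to 1.0 (major release at risk)
    reported_on: date

def disclosure_deadline(report: VulnerabilityReport, baseline_days: int = 45) -> date:
    # Shrink the window for severe, easily exploited bugs; stretch it
    # modestly when patching would seriously disrupt the vendor's business.
    urgency = (report.severity + report.exploit_likelihood) / 2
    days = baseline_days * (1.5 - urgency) * (1 + 0.5 * report.business_impact)
    return report.reported_on + timedelta(days=round(days))

# A severe, likely-to-be-exploited bug gets about 32 days instead of a flat 45.
report = VulnerabilityReport(severity=0.9, exploit_likelihood=0.8,
                             business_impact=0.2, reported_on=date(2014, 11, 1))
print(disclosure_deadline(report))  # 2014-12-03

The particular formula doesn’t matter; the point is that a single flat window ignores exactly the variables a mediating body would be best positioned to weigh.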
Tal Klein is vice president of strategy and marketing at Adallom.