For years the government has refused to talk about or even acknowledge its secret use of zero-day software vulnerabilities to hack into the computers of adversaries and criminal suspects. This year, however, the Obama administration finally acknowledged in a roundabout way what everyone already knew—that the National Security Agency and law enforcement agencies sometimes keep information about software vulnerabilities secret so the government can exploit them for purposes of surveillance and sabotage.
Government sources told the New York Times last spring that any time the NSA discovers a major flaw in software, it has to disclose the vulnerability to the vendor and others so that the security hole can be patched. But they also said that if the hole has “a clear national security or law enforcement” use, the government can choose to keep information about the vulnerability secret in order to exploit it. This raised the question of just how many vulnerabilities the government has withheld over the years to exploit.
In a new interview about the government’s zero-day policy, Michael Daniel, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues, insists to WIRED that the government doesn’t stockpile large numbers of zero-days for use.
“[T]here’s often this image that the government has spent a lot of time and effort to discover vulnerabilities that we’ve stockpiled in huge numbers … The reality is just not nearly as stark or as interesting as that,” he says.
Zero-day vulnerabilities are software security holes that are not known to the software vendor and are therefore unpatched and open to attack by hackers and others. A zero-day exploit is the malicious code crafted to attack such a hole and gain entry to a computer. When security researchers uncover zero-day vulnerabilities, they generally disclose them to the vendor so the holes can be patched. But when the government wants to exploit a hole, it withholds the information, leaving all computers that contain the flaw open to attack—including U.S. government computers, critical infrastructure systems, and the computers of average users.
Daniel says the government’s retention of zero-days for exploitation is the exception, not the rule, and that the policy of disclosing zero-day vulnerabilities by default—aside from special-use cases—is not new but has been in place since 2010. He won’t say how many zero-days the government has disclosed in the four years since the policy went into effect or how many it may have been withholding and exploiting before the policy was established. But during an appearance at Stanford University earlier this month, Admiral Mike Rogers, who replaced the retiring Gen. Keith Alexander as NSA director last spring, said that “by orders of magnitude, the greatest numbers of vulnerabilities we find, we share.”
That statement, however, appears to contradict what a government-appointed review board said last year. So WIRED spoke with Daniel in an effort to get some clarity on this and other questions about the government’s zero-day policy.
Timeline of Policy in Question
Last December, the President’s Review Group on Intelligence and Communications Technologies seemed to suggest the government had no policy in place for disclosing zero-days when it recommended in a public report that only in rare instances should the U.S. government authorize the use of zero-day exploits, and then only for “high priority intelligence collection.” The review board, convened by President Obama in the wake of Edward Snowden’s revelations about the NSA’s surveillance activities, produced its lengthy report to provide recommendations for reforming the intelligence community’s activities. The report made a number of recommendations on various topics, but the one addressing zero-days was notable because it was the first time the government’s use of exploits was acknowledged in such a forum.
The review board asserted that “in almost all instances, for widely used code, it is in the national interest to eliminate software vulnerabilities rather than to use them for US intelligence collection.” The board also said that decisions about withholding a vulnerability for purposes of exploitation should only be made “following senior, interagency review involving all appropriate departments.” And when the government does decide to withhold information about a zero-day hole to exploit it, that decision should have an expiration date.
Obama appeared to ignore the board’s recommendations when, a month later, he announced a list of NSA reforms that contained no mention of zero-days or the government’s policy about using them. It wasn’t until the Heartbleed vulnerability was discovered in April, and a news report falsely claimed the NSA had known about the flaw and kept silent about it in order to exploit it, that the administration finally went public with a formal statement about its zero-day policy. In addition to comments given to the Times announcing the default disclosure policy, Daniel published a blog post stating that the White House had also “reinvigorated” its process for implementing this “existing policy.”
The statements, however, raised more questions than they answered. Was this a new policy or had the government been disclosing vulnerabilities prior to this announcement? What did “reinvigorated” mean? And did the policy apply equally to zero-day vulnerabilities that the government purchased from contractors or just ones that the NSA itself discovered?
Daniel says that although the default-disclosure policy was established in 2010, it “had not been implemented to the full degree that it should have been,” hence the government’s use of the term “reinvigorated” to describe this new phase. The relevant agencies, he said, “had not been doing sufficient interagency communications and ensuring that everybody had the right level of visibility across the entire government” about vulnerabilities that were discovered.
What this means is that although “they probably were disclosing the vulnerability” by default, they “may not have been communicating that to all the relevant agencies as regular[ly] as they should have been.” Agencies, he says, might have been communicating “at the subject-matter expert level,” but the communication may not have been happening as consistently, in as coordinated a fashion, or within the timelines that the policy dictated. This was the part, he said, that was “reinvigorated” this year “to make sure it was actually happening consistently and as thoroughly as the policy called for.”
Daniel didn’t say exactly when in 2010 the policy was initiated or what prompted it, but 2010 is the year the Stuxnet worm was discovered infecting machines in Iran. Stuxnet was a digital weapon reportedly designed by the U.S. and Israel to sabotage centrifuges enriching uranium for Iran’s nuclear program. It used five zero-day exploits to spread, one of which targeted a fundamental vulnerability in the Windows operating system that affected millions of machines around the world; information about that vulnerability was nonetheless kept secret so the U.S. and Israel could use it to spread Stuxnet on machines in Iran.
Asked why, if the policy had been in place since 2010, the review board didn’t seem to know about it when it made its recommendations last December, Daniel said he didn’t know. So WIRED contacted Peter Swire, a member of the review board and a professor of law and ethics at the Georgia Institute of Technology, to clarify whether the group had been briefed on the existing zero-day policy before it wrote its report. Swire said it had, but he parsed his words carefully as he explained that the group’s recommendations to the president stemmed from the fact that the policy wasn’t being implemented as the board thought it should be, noting that certain presumptions in the existing policy needed to be clarified and strengthened.
“A presumption might mean you take action 55 percent of the time [to disclose a vulnerability] or a presumption might mean we do it 99 percent of the time,” Swire said. “A 99 percent presumption is a much stronger presumption; it means exceptions are much less frequent…. Our recommendation was to have significantly fewer exceptions.”
The group also recommended, he said, a shift in the “equities” process—the process used to determine when a vulnerability is withheld and when it is disclosed—from the NSA to the White House, implying that until this year the NSA or the intelligence community had been the sole arbiter of decisions about the use of zero-day vulnerabilities and exploits. The review board had recommended that the National Security Council have an oversight role in this process, and Daniel confirmed to WIRED that his office now oversees it. So it appears that although Obama didn’t publicly acknowledge the review board’s recommendations when he announced his NSA reforms last January, he did in fact implement its two recommendations on the government’s handling of zero-days: strengthening the default presumption toward disclosure, and giving someone other than the NSA the authority to decide when to disclose or withhold them.
On How the Interagency Equities Process Works
Daniel wouldn’t go into detail about the equities process or who is involved in it, other than to say “the agencies that you would expect” use a “multi-factor test” to examine vulnerabilities and determine how extensively the affected software is used in critical infrastructure and U.S. government systems, and how likely it is that malicious actors have already obtained the vulnerability or may obtain it. “All of those questions that are laid out, we require that analysis and discuss each one of those points. Then groups of subject-matter experts across the government make a recommendation to this interagency group that I chair here on the National Security Council.” The subject-matter experts provide “their best judgment about its widespreadness or how likely it is that researchers are going to be able to discover it or how unlikely it is that a foreign adversary has it.”
He reiterated that the government’s default position would be to disclose but that there “are a limited set of vulnerabilities that we may need to retain for a period of time in order to conduct legitimate national security intelligence and law enforcement missions.”
He wouldn’t say how long the government can withhold information about a vulnerability in order to exploit it before disclosing it, but he said the period “is not one that lasts in perpetuity. In fact the policy actually says that we must regularly review a decision to retain a vulnerability and make sure that all the factors that I mentioned before still hold.” That review, he says, happens several times a year. “So the situation may change and we may decide at that point that it’s time to actually disclose the vulnerability,” he notes.
On Stockpiling Vulnerabilities
Daniel would not say how many vulnerabilities the government has disclosed so far. But he denied that it maintains a vast repository of secret vulnerability data.
“What we can say is that the overwhelming majority of those that we find we do disclose,” he said, echoing the words Rogers had used. “The idea that we have these vast stockpiles of vulnerabilities stored up—you know, Raiders of the Lost Ark style—is just not accurate. So the default position really is that we disclose most of the vulnerabilities that we find to the vendors. We just don’t take credit for it for a variety of reasons and have no desire to take credit for it.”
Asked if the disclosure policy also applies to zero-day vulnerabilities and exploits the government purchases from contractors and independent sellers, Daniel said it does.
“It’s difficult for me to talk about where we might find the vulnerabilities or the source of the vulnerabilities that the US government comes across because of course a lot of that is classified,” he said. “[B]ut the policy remains that our default position is going to be and our strong bias is going to be that we will disclose vulnerabilities to vendors. If you picked an economy that was digitally dependent, the United States is certainly at the top of the list, right? So it’s highly likely that we are going to face a situation where a vulnerability would be something that we would be concerned about from a network defense standpoint. So it shouldn’t be surprising that our bias is going to be towards disclosing it.”
How exactly this would work is unclear, since the government doesn’t necessarily own the information and code it purchases. Not every exploit is sold under an exclusivity agreement, and sellers may also ask the government to sign a nondisclosure agreement related to the sale.
Daniel replied that it made perfect sense to purchase some vulnerabilities in order to disclose them if, for example, the government learned that someone was peddling a vulnerability that affected a lot of critical infrastructure networks and wanted to take it off the market and get it fixed. “I’m not saying that would be the primary method or even the most desirable method, but it is certainly one that you could contemplate the US government pursuing if we thought the vulnerability was significant enough for us to try to get it patched.”
But it’s unclear how the default disclosure process applies when the government is purchasing vulnerabilities from sellers specifically to exploit them. What would be the point of spending U.S. tax dollars on a vulnerability only to burn it by disclosing it? Daniel sidestepped the question, saying, “[T]here’s often this image that the government has spent a lot of time and effort to discover vulnerabilities that we’ve stockpiled in huge numbers and similarly that we would be purchasing very, very large numbers of vulnerabilities on the open market, the gray market, the black market, whatever you want to call it. And I think the reality is just not nearly as stark or as interesting as that, and that the numbers are just not anywhere near what people believe they are….”