The window of time between vulnerability discovery and its subsequent patch can be a quiet, calm process of orchestrating patches across organizations or a mad rush to plug a hole that’s actively being exploited in the wild. It all depends on how vulnerabilities are disclosed and how quickly vendors act to fix the problem. Accordingly, zero day vulnerabilities can be a nightmare for defenders and a dream for attackers.
One of the fundamental tensions in the information security industry is the clash of philosophies between vendors and the independent research community when it comes to vulnerability disclosure. Generally, researchers and vendors are driven by a common desire for safer and more secure software. But the ground rules under which collaboration happens still leave much to be desired.
Going Full Disclosure
Even before security researcher Rain Forest Puppy codified the first “full disclosure” policy, the relationship between security researchers and software vendors was strained by conflicting motivations, legal threats and accusations of disregard for the security of users.
Historically, vendors have claimed that vulnerabilities should be disclosed only after a patch has been issued (so-called “responsible” disclosure). This way, they can ensure that their customers are fully protected. On the more enlightened end of the spectrum, vendors proactively engage with researchers who report vulnerabilities, giving them credit and working with them to swiftly fix the problem. However, a cursory look at Attrition.org’s long list of lawsuits and takedown notices targeting security researchers shows that many vendors are not so enlightened. As Wired reports, even security vendors who regularly disclose vulnerabilities in other companies’ products have sued researchers in the past, highlighting the complicated nature of disclosure even within the information security industry. A tone-deaf blog post by Oracle CSO Mary Ann Davidson from earlier this year underscored this complexity, essentially warning security professionals that they could be sued for reviewing Oracle code for security purposes, a common practice among application security engineers. The common complaint from companies bringing these lawsuits is that research exposing design flaws jeopardizes their customers’ safety or compromises their intellectual property.
The “full disclosure” movement was born out of the legal risk faced by researchers working with vendors, the snail’s pace at which many serious software issues were being fixed and (to some extent) the lack of exposure issues received following private (or “responsible”) disclosure. Full disclosure occurs when a researcher publicly posts information about a vulnerability to the general research community, a “name and shame” tactic meant to force a vendor to patch a publicly known flaw. The term also inspired the infamous Full Disclosure mailing list, where several notable zero day vulnerabilities have been announced over the years.
Full disclosure is controversial because it can set off a race between vendors and online criminals: the former rush to put out a patch, while the latter work to leverage the freshly exposed flaw in attacks out in the wild. Even when vendors get their patch out promptly, many end users still don’t patch their machines on a regular basis. By some accounts, nearly half of attacks leverage vulnerabilities that are two years old or more, meaning that criminals are still profiting by exploiting these known vulnerabilities. In the most extreme circumstances, full disclosure can be the equivalent of dropping a zero day vulnerability on an unsuspecting user base.
To complicate things further, there are no universal standards for full disclosure. While most researchers give vendors a time limit to respond privately and collaborate on a fix, others have been burned by vendors in the past and are less collaborative. As independent security analyst Richard Stiennon writes in Forbes, the lack of a standard framework for disclosing vulnerabilities can lead to controversy when flaws like Heartbleed, which impacted millions of servers and devices around the world, are revealed. Before Heartbleed was disclosed, information about the bug leaked out through various security mailing lists and informal networks of researchers, leading to a pronounced difference in response between the companies that were “in the know” and those that found out only when the bug was publicly revealed.
Coordination vs. Anti-Collaboration
When one vendor discloses a vulnerability in another’s product, things can get even trickier. While full disclosure is often used to protect researchers from lawsuits that would prevent them from disclosing a vulnerability, vendors can typically shield their own employees from legal action. Instead, cross-company disclosures are usually coordinated to improve the security of mutual customers. There’s a long history of outside vendors working with Adobe and Microsoft on the coordinated disclosure of vulnerabilities in software that is ubiquitous in enterprise environments. The update to Microsoft’s Coordinated Disclosure Policy published on the company’s blog earlier this year is a manifesto of sorts, calling for a better working relationship between companies on this issue.
But “name and shame” can still be a motivation for disclosure when companies act as rivals. As eWeek‘s Sean Michael Kerner reported in January of this year, Google’s regular disclosure of Microsoft zero day flaws, some of which were already patched, has raised eyebrows about the lack of communication between the two companies. In another instance, FireEye announced malware targeting routers built by OpenDNS parent company Cisco Systems and marketed the finding much like a zero day vulnerability such as Heartbleed, even though the malware requires administrator access to install on targeted systems, meaning a router must already be compromised before the malware can take hold. This marketing effort, coming a month after Cisco had already warned users about such attacks, suggests that disclosures between companies can sometimes be more smoke than fire.
A Way Forward
Even with the loaded history of vulnerability disclosure between vendors and researchers, there are some recent indications that progress is being made. Starting in 2007, the Pwn2Own contest at CanSecWest has slowly evolved into the infosec community’s own equivalent of a game show, where researchers work to trump each other with zero day vulnerabilities against some of today’s most well-known software. Increased media attention at shows like Black Hat and DEF CON has elevated some security researchers to rockstar-like levels of notoriety. Indeed, researchers like Metasploit creator H.D. Moore (who once hosted a “Month of Browser Bugs” on his personal blog, dropping over a dozen zero days in less than 30 days) are now infosec industry executives themselves.
As the industry has gradually moved away from lawsuits to stop disclosures, a variety of efforts have sprung up to foster a healthier working relationship between vendors and researchers. The increasing prevalence of bug bounties, issued by firms large and small to reward vulnerability reporting, marks a seismic shift in companies’ stance from reactive to proactive. In 2014, bug bounty firm Bugcrowd announced the first open source framework for vulnerability disclosure, developed alongside legal experts from CipherLaw. The framework gives companies a way to proactively extend legal protection to researchers and to encourage them to engage directly with vendors in coordinated disclosure.
But even as the industry moves to accept (and perhaps embrace) vulnerability disclosure as a valuable tool for improving software security, legal protection for researchers may be the next hurdle for the industry to tackle. As reported by Kim Zetter in WIRED, the National Telecommunications and Information Administration (NTIA) recently held its first-ever meeting on the subject of vulnerability disclosure. While the meeting itself was a sign of progress, attendees highlighted how laws like the DMCA may ultimately limit the ability of security researchers to do their jobs.