Here’s to the Crazy Ones
As the number of security breaches grows exponentially, and attacks become cheaper and easier to execute, companies are turning to a fascinating community of hackers with ethical fiber. Unlike the malicious hackers who make headlines most often, this breed of mostly silent security researchers is motivated to help make the Internet safer for everyone, and to make money while doing so.
Crowdsourcing a Safer Netscape
In October 1995 Netscape launched a bug bounty program for the shiny new Netscape Navigator 2.0, possibly the first ever documented program of its kind. Prizes ranged from cash to a grab bag of goodies from the Netscape “general store.” Netscape was crowdsourcing before it had a name.
The market is a lot bigger now, with many of the largest tech companies in the world offering bug bounty programs. The stakes are higher as well, with companies like Mozilla paying as much as $10,000 for the most novel bugs. But the fundamental motivations behind these programs remain the same: secure coding is difficult, and finding every hole in a site or product’s security can be nearly impossible for a single security team.
A Talent Pool of Hackers
Tech giants like Mozilla, Google, Facebook, GitHub, and Yahoo have already realized the value of bug bounty programs and run their own. But for those without the resources to do the same, or those that have the resources but don’t want the headache, services like Bugcrowd, Crowdcurity, and HackerOne are making it easier to get help from a pool of security pros.
And while some financially conscious companies might balk at the prospect of paying tens of thousands of dollars to a hacker for a single bug, it’s likely much more attractive than the alternative. IBM’s Cost of Data Breach Study estimates the average cost of a breach at $3.4 million, a 23 percent increase since 2013.
Let the Good Guys In
Bounty programs can be hugely helpful to a short-staffed security team needing to plug security holes, but going it alone might not be the best way. According to Bugcrowd Founder and CEO Casey Ellis, companies will often try running their own bounty program, then end up turning to a service instead. Ellis says this tends to happen partly because it’s easy to underestimate the overhead of running a program, and the community-management aspect of it. “If you get that wrong and end up with a cranky researcher, you’ll have a problem,” he said.
Pinterest was one such company: it tried going it alone but quickly became overwhelmed when reports began flowing in, and the social site’s security team enlisted Bugcrowd for help.
Ellis said the need for such programs is on the rise: security talent is scarce, and given the sheer effort it takes to put out a good product, security is not always top of mind for developers. “Programmers are not really incentivized to program securely,” he said. “It’s really a quality issue, at the end of the day. And quality is hard.”
Even Facebook, modern engineering powerhouse that it is, has acknowledged this. The company’s bounty update from 2013 states the challenge clearly: “…no matter how much we invest in security–and we invest a lot–we’ll never have all the world’s smartest people on our team and we’ll never be able to think of all the different ways a system as complex as ours might be vulnerable.”
LinkedIn has taken a different approach to the noise and quality problems of running a public program: keeping it private. The professional networking company announced in a blog post on June 17 that its security team has been running a private bounty program since October 2014 with a community of selected researchers. While the security team still takes bug reports from the public, LinkedIn’s Director of Information Security Cory Scott said he believes the private model is better suited to a bounty program.
“We’ve seen the signal-to-noise ratio of public bug bounty programs continue to degrade,” Scott wrote, “requiring companies to hire dedicated resources, engage consultancies, or use their platform vendor to sift through all of the bug reports to find actionable ones. The spirit of the original bounty programs have been co-opted, and I think an invitation-based model helps bring it back to the original intent.”
Do It Right: Prepare, Respond, and Reward
Like any community-centric venture, there are right ways to successfully run a bug bounty program, and mistakes that can lead to adverse responses from the research community.
Brian Carpenter, a freelance security and malware researcher with decades of experience, knows the nuances of the research community well. He’s a recognized Google and Mozilla Hall of Fame bug reporter, and has negotiated bug reports with a number of companies of varying size and notoriety. Echoing advice from Bugcrowd’s Ellis, Carpenter says a company’s success with a bounty program can be linked to its preparation, part of which is clearly defining what is “in scope.”
“Some of these companies really have no idea what they are getting into, or how badly their networks and websites are coded,” he said. “Quite a few companies have been so overwhelmed by reports that they’ve had to stop taking reports altogether.”
A company’s response to reported bugs is often the turning point between an unknown exploit and a headline-grabbing security leak. One high-profile example goes back to 2013, when a then-unknown Pakistani hacker compromised Mark Zuckerberg’s personal Facebook profile to prove a point after his official bug reports went unanswered.
But it’s not just the large tech companies that have made this mistake. In January 2015, UK-based personal greeting card company MoonPig was outed for a litany of security flaws in its API, which exposed personally identifiable information of its three million users. The developer who found the flaw, Paul Price, wrote in his blog post that he had contacted the company 17 months before posting about the flaw publicly but received no response after multiple attempts.
Not all developers and researchers are so inclined to do the right thing when they feel slighted, unacknowledged, or don’t get paid. Carpenter says that for some hackers, it can be more enticing to sell a flaw to other hackers who would rather keep it quiet and exploit it.
“When you’re looking at receiving $10,000 in Bitcoin via the black market compared to a T-shirt and a firm handshake from a Fortune 100 company, you can see why some folks would do that,” Carpenter said. He added that his long relationship with law enforcement (Carpenter is a former police officer himself) keeps the temptation out of the question for him personally.
If A Duplicate, Show Your Work
A big pet peeve throughout the bug research community is companies running a bounty program claiming a reported flaw is a “dupe” without proving it. Duplicates are a common issue on both sides of a bounty transaction: a security researcher finds something that looks like gold, only to hear from the company that it’s already been found, and thanks for trying.
It’s an issue Vimeo CTO Andrew Pile knows well. “There is a huge amount of trust involved,” he said in an interview with The Verge. “[Researchers] spend a ton of time identifying and documenting these issues, and then the report goes into a black box. I closed out a significant number that were duplicates, and unfortunately we can only pay on a first come, first serve basis.”
The frustration could be avoided, Carpenter says, if more companies provided proof that a report is a duplicate. “They just expect you to believe them. I can’t prove that they are doing it to get out of paying, but a reputable company would provide you access to the other bug reports upon request.”