There has been a lot of coverage in the media recently concerning the Pegasus spyware and the zero-click exploits that are starting to emerge. Public disclosure and discussion around these exploits have resulted in both the creation of Common Vulnerabilities and Exposures (CVE) entries and eventual patches from the affected vendors. This latest news adds urgency to a question I’ve been thinking about for a while: What’s the best model to encourage the rapid disclosure of vulnerabilities so parties can mitigate risk faster?

If you’ve been in cybersecurity as long as I have, you may be familiar with a security mailing list called “Full Disclosure.” In its heyday, it served as a place where researchers could publish their findings when they discovered a vulnerability, including the source of the vulnerability and exploitation techniques. The idea was that the faster these vulnerabilities or weaknesses could be shared within the community, the faster people could secure whatever they were responsible for securing. At the time, I was working on a security product that was predicated on being able to detect threats. Having early access to these findings allowed us to create rules to detect malicious activity faster and better serve our customers.

Over time, the approach to vulnerability disclosure started to morph into what I’ll call “responsible disclosure.” When a researcher came across a vulnerability, the expected course of action was to contact the vendor of the vulnerable product, make them aware of the issue, and agree on a reasonable timeframe for them to address it. After the vendor officially issued a patch or recommended a compensating control, the researcher could release their findings. If the vendor did not address the vulnerability within the agreed-upon period of time, the researcher was free to disclose their findings publicly. This approach worked fairly well because vendors had a chance to take corrective action before a weakness was widely known, but their feet were still held to the fire to inform and safeguard users.

Fast-forward to the rise in digitization and the unintended consequences of an explosion in cybercrime, and the need for disclosure is stronger than ever. At a time when it is critical that infosec professionals and consumers understand threats and vulnerabilities, they are being kept in the dark. Findings are no longer shared openly. Instead, the bug bounty phenomenon is proliferating, pumping more than $40 million into hackers’ wallets in 2020 alone, according to bug bounty operator HackerOne. That’s a rise of 143% since HackerOne last reported this data in 2018.

Private companies offer bug bounty programs as a way to attract researchers to help them better secure their own products, which sounds great in principle. But here’s where things can go awry. If, after evaluation, the company determines not to address the vulnerability for business reasons, it can choose to sweep the problem under the rug. There is no incentive for the company to fix its product, so users are left exposed.

Another problematic aspect of bug bounty programs is that there is a good chance the researcher is not the only person to have found a given vulnerability. A more nefarious actor may be selling their findings, or creating an exploit for the vulnerability and selling that on the Dark Web, making it even easier for others to leverage it faster. Vulnerability disclosure programs devalue these hackers’ products because they no longer have a zero-day to sell: everyone knows about the vulnerability, and security practitioners and vendors can start writing rules and signatures and developing other methodologies to detect and prevent an exploit. Substituting bug bounty programs for vulnerability disclosure programs can keep vulnerabilities alive for longer, if not indefinitely.

The increase in law enforcement activity to bring down perpetrators adds further complexity to the discussion. For years, there has been a view that vulnerability disclosure programs can thwart law enforcement by jeopardizing a case, where the objective is to protect against nation-state actors by gathering evidence and seeking attribution. But unless there’s a belief that the weakness or vulnerability is being leveraged as part of a major crime spree, there isn’t much value for most companies in tracking criminals. If we think about what CISOs and those of us who work on their behalf care about, it’s mitigating risk to the enterprise. Sharing information about vulnerabilities and weaknesses in products enables and accelerates this.

So, where do we go from here? I suggest we pull back the curtain on bug bounty programs. Let’s start a discussion about the pros and cons of going directly to a corporation and handing over research for a reward, versus disclosing findings in a community. A lot has changed since the Full Disclosure mailing list was launched nearly 20 years ago, including business models that utilize undisclosed exploits as a product, with the unintended consequence of facilitating nefarious operations. Let’s explore the behaviors we want to encourage, the guardrails that should be in place, and how we define that community: unvetted and completely open, or vetted in some way.

There’s got to be a happy medium. What do you think?
