Private firms spend too little money on cyber defense. A survey of cyber harms makes this clear. The FBI just last week estimated that business email compromise scams alone have cost victims $43 billion since 2016. That comes after Chainalysis estimated that ransomware victims paid more than $600 million in 2021. These harms are quantifiable, but they fail to capture the staggering amount of intellectual property and consumer data lost in cyberspace each year.
Much of the harm we face stems from the fact that the digital systems we rely on are privately owned. These include the systems that control our critical infrastructure, such as pipelines, the power grid, water treatment facilities, nuclear reactors, and telecommunications. While private firms control and protect those systems, attacks against them affect the public. Ciaran Martin calls this the “privatization of national security risk.”
Our solutions to this problem have been inadequate. The cyber insurance market is immature and struggling to quantify non-random cyber risks. Government mandates and regulatory action are usually either too prescriptive or too vague, and they are always slow to implement. Information-sharing mechanisms are helpful, but they struggle to get private sector buy-in.
Bug bounty programs are a market-based solution with a proven record of improving cybersecurity. Under these programs, a company compensates a white hat hacker who discloses a vulnerability in the company’s digital infrastructure. This is done on an ad hoc basis and is often facilitated by third-party companies such as HackerOne. Bug bounty programs provide a standing financial incentive for hackers to identify flaws in a company’s systems, notify the company, and prompt the company to patch.
However, bug bounty programs have two major shortcomings. First, when a company pays for a vulnerability, it has no obligation to patch it quickly, or even to patch it at all. Second, these programs create a network of monopsony markets.
In microeconomic theory, monopsony describes a market where there is only one buyer of a good. This buyer is called a monopsonist and uses its outsized buying power to artificially decrease the price of the good. Similar to a monopoly (a market with one seller of a good), monopsony leads to an inefficient market outcome.
A company with a bug bounty program is a monopsonist in the market for its own vulnerabilities. This means the company can set the price for a bug, pushing it below the vulnerability’s true value.
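To see the intuition formally, consider a stylized textbook sketch (the notation here is mine, not drawn from any empirical study of bounty markets). Suppose disclosed bugs arrive along an upward-sloping inverse supply curve w(q), where w is the bounty needed to elicit the q-th disclosure, and the company values q disclosures at V(q). The monopsonist solves

\[
\max_{q} \; V(q) - w(q)\,q \quad\Longrightarrow\quad V'(q^{*}) = w(q^{*}) + q^{*}\,w'(q^{*}).
\]

Because w'(q) > 0, the bounty the company actually pays, w(q*), sits strictly below the marginal value V'(q*) of the last disclosed bug, and fewer bugs are disclosed than a competitive market would produce.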
To disrupt a monopsony, the solution is to introduce more buyers into the market. This increases competition and moves the price of the good closer to a market-efficient level. However, there is no obvious private actor that policymakers would be comfortable seeing purchase these vulnerabilities. That leaves the government to enter the market.
The federal government should pay for vulnerabilities in critical private sector systems. In other words, the government should compete with private companies in the markets for their vulnerabilities. Once the government is in possession of a vulnerability, it should have the authority to fine the affected company for its security lapse and mandate that it patch.
The direct effect of this model would be to increase private sector spending on cybersecurity, boost the supply of hackers researching vulnerabilities, and incentivize quicker patching of systems. This model could be implemented through industry-specific regulators such as the Securities and Exchange Commission or the Federal Communications Commission, or through a centralized agency like the Cybersecurity and Infrastructure Security Agency (CISA).
To illustrate the advantages of this model over the status quo, I will discuss a hypothetical scenario in which a hacker discovers a vulnerability in a private system that helps run the power grid.
Under the status quo, a hacker approaches the electric company with a serious vulnerability that threatens the integrity of the power grid. The researcher knows that the company has a bug bounty program and will pay for this vulnerability. But the company is aware that it is the only legitimate buyer of the vulnerability. It uses its bargaining position to offer $1,000 for the bug. This is a low price, but the hacker is principled; she does not want to take the bug to the black market. She sells it for $1,000. The company gets the vulnerability. It may patch, but it may not.
Reconsider the scenario with a version of CISA that runs a bug bounty program for critical infrastructure systems. This time, when the electric company offers $1,000 for the vulnerability, the hacker goes to CISA’s bug bounty program instead. CISA offers to pay her $1,500 for the vulnerability because it is in a system critical to national security. She accepts, takes the money home, and tells her hacker friends; they see her payday and decide they want to hunt for critical infrastructure vulnerabilities, too. CISA takes the bug, fines the electric provider $1,600 for failing to secure its system, and directs it to patch. CISA also has the option to share the vulnerability with other companies to increase overall defense.
Now consider the next iteration of this scenario. A hacker friend of the original security researcher, who saw how much CISA paid for a critical infrastructure vulnerability, decides to probe the electric company’s systems. He identifies another vulnerability, just as serious as the last one. He takes it to the company. The company remembers that CISA would pay $1,500 for a bug like this. The company offers the hacker $1,501. The hacker takes the offer and goes back to bug hunting. The vulnerability stays off the black market.
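The second iteration illustrates a general point: a posted government price acts as a floor under private bounty offers. In stylized terms (again, notation of my own, assuming the government publishes its price P_gov),

\[
b_{\text{private}} \geq P_{\text{gov}} + \varepsilon,
\]

since any private offer below P_gov loses the bug to CISA and invites a fine. The company’s best response is to outbid the government by some small increment ε, here the $1 separating $1,501 from $1,500.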
Incentive to Patch. In the first iteration of the scenario, CISA directed the electric company to patch. But what guarantee is there that the company will patch in the second iteration, when CISA is not involved?
Consider what happens if the company decides not to patch. Two weeks after the company bought the vulnerability, a new hacker approaches it. This hacker independently identified the same bug and wants to cash in under the bug bounty program. The company already knows about the bug and does not want to pay. But if it refuses, the researcher will offer the bug to CISA, which will buy it and fine the electric company.
Foreseeing this, the electric company would instead choose to patch when it originally learned of the bug. In short, the ability of CISA to fine the company for that vulnerability means that the company always faces a strong incentive to patch.
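A back-of-the-envelope comparison makes this logic concrete (the symbols are assumptions for illustration, not parameters from any actual program). Let c be the cost of patching, F the fine, and p the probability that another researcher independently rediscovers the bug and sells it to CISA. Ignoring breach losses, which only strengthen the case for patching, the company prefers to patch immediately whenever

\[
c < p\,(F + c) \quad\Longleftrightarrow\quad p > \frac{c}{F + c}.
\]

Because rediscovery becomes more likely the longer a bug sits unpatched, especially once government bounties enlarge the pool of researchers probing these systems, p climbs toward 1 and patching becomes the dominant strategy.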
Long-Run Effects. This model produces three long-run impacts that improve security.
First, third-party security research increases because the federal government has entered the market. In other words, the supply of known vulnerabilities increases because demand for them increases.
Second, companies have stronger incentives to patch once they identify a vulnerability.
Third, companies have an incentive to grow their in-house cybersecurity capabilities. By doing so, they capitalize on economies of scale and decrease the amount they pay out in bounties.
Overall, companies face much stronger incentives to increase their security when the federal government enters the market.
Beyond directly incentivizing companies to increase investment in cybersecurity, this model has advantages in its variability, scalability, and precision.
Variable. One major question is how the federal government should decide how much to pay for vulnerabilities. This is actually an advantage of the model: the government’s willingness to pay is easily calibrated to the severity of the vulnerability and the entity it affects. In other words, price is a lever the government can use to shape private cybersecurity investment.
If the government wants to increase security research and investment in cybersecurity, it can raise the amount it pays in bounties. Companies must raise their own investment in response; if they do not, they will be outbid for their own vulnerabilities and face fines. Similarly, if the government believes companies are overinvesting in security at the expense of innovation, it can decrease its willingness to pay for vulnerabilities. This prompts private firms to pay less for bugs and reallocate that funding toward innovation.
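One could imagine operationalizing this lever with a simple published price schedule. The functional form below is purely illustrative; the standardized severity score (e.g., CVSS) and per-sector multipliers are my own assumptions, not part of any agency proposal:

\[
P_{\text{gov}}(v, s) = \beta_{s} \cdot \text{severity}(v),
\]

where severity(v) scores vulnerability v and β_s is a multiplier for sector s that regulators raise when they want firms in that sector to invest more, and lower when they believe those firms are overinvesting.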
Scalable. The government can pilot this program. If it is successful, the government can then scale the program to additional actors and industries. For example, the government can pilot the program with critical infrastructure companies, beginning with those designated under Executive Order 13636 §9. If the model is successful with those so-called Section 9 entities, the government can expand it to include more companies in the industries President Obama’s Presidential Policy Directive 21 identifies as critical.
Precise. The government can tailor this model by industry or even by individual firm. Take, for example, a vulnerability affecting financial services companies versus one affecting internet service providers (ISPs). Assume the vulnerabilities are of equal severity. The government may pay less for the vulnerability affecting financial services companies if it believes those companies are overinvesting in security at the expense of innovation. At the same time, it may pay more for the ISP vulnerability if ISPs have a track record of poor security.
This model uses markets to give the government a tool to address private firms’ underinvestment in cybersecurity. It does so without clumsy policy interventions, such as mandates. And it is attractive because it is variable, scalable, and precise.
Unfortunately, this model would face hurdles to implementation. Many companies would be uncomfortable competing with the federal government in a marketplace. Given the lack of appetite for common-sense measures like breach reporting requirements, the prospects for this unprecedented solution are slim. But in the search for tools to increase cyber defense, policymakers should not overlook the influence of market incentives. It is the language the private sector knows best.