

Economics meets cybersecurity: A light at the end of the tunnel?

N4nk3r ph3193, Security researcher
 

Modern computers and networks aren’t as secure as we’d like them to be, partly because we keep building on top of old systems instead of designing from the ground up using good security engineering principles.

They also use hardware and software that are full of bugs, and lots of these bugs cause serious security vulnerabilities. This makes using traditional risk management approaches tricky.

Here’s how we may be able to do this kind of risk management better in the future, using the work of economists David Card and Joshua D. Angrist, who recently shared the 2021 Nobel Prize in economics.

Measuring cybersecurity outcomes

Risk is traditionally calculated as an expected value: the risk associated with a particular event, such as a data breach, is the probability of the event happening multiplied by the loss that would result from it. So, if a hypothetical data breach would cause $10 million in losses and has a 20% chance of happening in a given year, we can calculate the risk, often called the annual loss expectancy (ALE), by multiplying $10 million by 0.2 (the decimal value of 20%). That comes to $2 million of risk per year (0.2 x $10 million = $2 million).
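To make the arithmetic concrete, here is a minimal Python sketch of that ALE calculation, using the hypothetical breach-loss and probability figures from the example above.

```python
# Annual loss expectancy (ALE) = probability of the event in a year x loss if it happens.
# The figures below are the hypothetical values from the example above.

def annual_loss_expectancy(probability: float, loss: float) -> float:
    """Expected yearly loss from a single event."""
    return probability * loss

breach_loss = 10_000_000   # estimated loss from the hypothetical data breach, in dollars
breach_probability = 0.20  # estimated chance of the breach occurring in a given year

ale = annual_loss_expectancy(breach_probability, breach_loss)
print(f"ALE: ${ale:,.0f} per year")  # ALE: $2,000,000 per year
```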

This provides a handy way to measure the effectiveness of cybersecurity measures. Say we determine that the total cost of ownership (TCO) of a particular technology is $1 million per year. If it reduces the ALE of a particular event from $5 million to $3 million, we’re spending $1 million to get $2 million in benefits. It's probably a good investment.
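Framed the same way in code, the investment question becomes a comparison between the reduction in ALE and the technology’s TCO. The figures below are again the hypothetical ones from this example.

```python
# Compare the yearly cost of a security technology against the risk reduction it buys.
# All figures are the hypothetical values from the example above.

tco = 1_000_000          # yearly total cost of ownership of the technology
ale_before = 5_000_000   # annual loss expectancy without the technology
ale_after = 3_000_000    # annual loss expectancy with the technology

risk_reduction = ale_before - ale_after
net_benefit = risk_reduction - tco

print(f"Risk reduction: ${risk_reduction:,.0f}, net benefit: ${net_benefit:,.0f}")
# Risk reduction: $2,000,000, net benefit: $1,000,000 -> probably a good investment
```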

But it’s very hard to get the accurate values that are needed to make such calculations meaningful. Are the TCO estimates that vendors use to justify their products even close to being accurate? Probably not. Are the TCO estimates that your own security department puts together any more accurate? Possibly, but maybe not. There are just too many unknowns. At best, TCOs are estimated; at worst, they are guesstimated.

And because there are so many bugs in the hardware and software that we use, the chance of having at least one exploitable security vulnerability is essentially 100%. That's one figure you can take to the bank.

Cybersecurity tries to make the best of what is a bad situation, where TCO is basically unknowable and having a vulnerability is guaranteed. 

To estimate the probability that we need for an ALE calculation, we have to estimate the chances that a clever hacker will take the time and effort to find one of those weaknesses and exploit it. That’s hard to do accurately, which reduces the ALE approach to a rough guideline: something we can use to compare technologies or justify investments, but not something that reliably gives us accurate results.

Because the numbers we use in ALE calculations are often just best guesses at the real values, the calculations are likely to contain large errors. So, although we might think we’re saving $2 million by doing something, we might actually be losing $2 million, and that difference could easily be hidden by the inaccuracies in the TCO and ALE estimates.
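To see how easily that can happen, here is an illustrative sketch that treats each estimate as uncertain (the plus-or-minus 50% ranges are an arbitrary assumption, not data) and counts how often the "good investment" conclusion flips when the inputs are sampled from those ranges.

```python
import random

# Illustrative only: assume each estimate is uncertain by roughly +/- 50%,
# modeled here as a uniform range around the point estimate.
def uncertain(point_estimate: float, spread: float = 0.5) -> float:
    return point_estimate * random.uniform(1 - spread, 1 + spread)

trials = 100_000
losing_money = 0
for _ in range(trials):
    tco = uncertain(1_000_000)
    ale_before = uncertain(5_000_000)
    ale_after = uncertain(3_000_000)
    if (ale_before - ale_after) - tco < 0:  # the "savings" turn out to be a loss
        losing_money += 1

print(f"Investment loses money in {100 * losing_money / trials:.0f}% of trials")
```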

Is there a better way?

This is where the work of new Nobelists Card and Angrist comes in. The economists were cited “for their methodological contributions to the analysis of causal relationships,” which sounds promising for our purposes.

Economics isn’t too different from cybersecurity in some ways. In both cases, we want to understand and alter the behavior of complex systems in which it’s not easy to do careful, scientific experiments. The best we can do is look at something that was done to tweak the system and see how things change afterward. But because the interactions in these systems are so complicated, it’s hard to carefully and accurately account for all of the variables and how they might have affected the outcome.

In the case of economics, suppose that we change the minimum wage from a hypothetical $10 per hour to $15, and the unemployment rate subsequently increases. Does that mean the higher minimum wage caused higher unemployment? There will always be other factors and variables to consider. The economy could have gone into recession at the same time. A technological innovation might have been rolled out that year that eliminated lots of low-paying jobs.

Because it’s pretty much impossible to accurately model every change that might happen to an economy, the usefulness of any economic model is inherently limited when you use it to predict the results of a particular policy decision.

Similarly, with networks, it’s impossible to accurately model all of the interactions that might occur when you make a particular change to just a single component. Your upgrade to a new, bigger, better firewall might happen just before hackers launch a particularly effective attack. The attack is totally unrelated to the firewall upgrade, but nonetheless what you see is an increase in security incidents after the upgrade.

Likewise, even the simplest patch or update can introduce additional vulnerabilities; we just don’t know whether those vulnerabilities will be serious. It’s certainly possible that an update that addresses a particular vulnerability could cause an even more serious one. Such risk-risk tradeoffs, where mitigating one risk may increase another, are known to exist in many situations, but because it’s not always clear exactly which risks might increase when another risk is decreased, they’re very hard to model accurately.

But it looks as if there may be tools available that can help us understand these complicated situations. I can’t claim to understand the work of Card and Angrist, but my first look at it led me to believe that their ideas for better analyzing and understanding the complex situations in economics might also be applicable to analyzing and understanding the complex situations in cybersecurity. This probably isn’t something that we can easily apply to our day-to-day jobs today, but clever researchers could turn Card and Angrist's insights into tools that we could use in the not-too-distant future.
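As one hedged example of the kind of tool this might lead to, the sketch below applies a difference-in-differences estimate, a technique from the natural-experiment toolkit associated with Card and Angrist's line of work, to the firewall scenario above. The scenario, the "comparison" network segment, and every number in it are hypothetical, invented purely for illustration.

```python
# A toy difference-in-differences estimate. The scenario and all counts are hypothetical:
# one network segment gets the firewall upgrade ("treated"), a similar segment does not
# ("comparison"), and both are exposed to the same background attack wave.

# Monthly security incidents before and after the upgrade (hypothetical counts).
treated_before, treated_after = 40, 55        # upgraded segment
comparison_before, comparison_after = 42, 70  # non-upgraded segment

# The raw before/after change on the upgraded segment looks bad on its own...
naive_change = treated_after - treated_before             # +15 incidents

# ...but subtracting the change on the comparison segment removes the shared attack wave.
background_change = comparison_after - comparison_before  # +28 incidents
did_estimate = naive_change - background_change           # -13 incidents

print(f"Naive before/after change: {naive_change:+d} incidents")
print(f"Difference-in-differences estimate of the upgrade's effect: {did_estimate:+d} incidents")
```

The key (and strong) assumption is that the comparison segment would have trended the same way as the upgraded one if no upgrade had happened; if that holds, subtracting its change strips out the attack wave that had nothing to do with the firewall.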

Glimpses of the light

Even if it’s essentially impossible today to get a good understanding of how changes to networks might affect their security, it might be possible in the next 10 years or so. It can’t happen too soon. We’re probably spending too much on security technologies that don't reduce risk as much as we think, while spending too little on security technologies that could reduce risk more than we expect. With the help of Card and Angrist, we might be able to overcome that problem someday.

There may be light at the end of the dark cybersecurity tunnel after all. 
