Cyber risk represents one of the fastest-growing opportunities in the insurance marketplace. The cyber insurance market is set to grow at a compound annual growth rate (CAGR) of 25 percent between 2022 and 2028, to reach a market size of US$28 billion. This outpaces the growth in other insurance lines and represents a net new (and diversifying) revenue stream.
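As a rough, purely illustrative check of what that growth rate implies (assuming the US$28 billion figure is the 2028 endpoint), the implied 2022 starting point can be worked out in a few lines:

```python
# Back-of-the-envelope check of the quoted growth figures (illustrative only)
cagr = 0.25          # 25 percent compound annual growth rate
years = 2028 - 2022  # six years of compounding
market_2028 = 28.0   # projected market size, US$ billions

market_2022 = market_2028 / (1 + cagr) ** years  # implied starting market size
print(f"Implied 2022 market size: ~US${market_2022:.1f} billion")  # ~US$7.3 billion
```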
But the nascent cyber market is different from long-established natural catastrophe peril markets, which are well understood in terms of loss experience and are underpinned by trusted analytics from catastrophe models.
Unlike natural catastrophe risk, where insurers can diversify by geography or peril type, the cyber landscape is complex, ubiquitous, and highly uncertain.
Cyber risk diversification cannot be meaningfully achieved simply by thoroughly cataloging a client’s technology stack and, most critically, its software.
Despite the attractiveness of the cyber insurance market, insurers have been cautious and slow to take advantage of its growth potential, because the market presents unique challenges.
First, cyber risk is inherently uncertain. Unlike natural catastrophes, the threat actors and the defenders are human, and the “battlefield” on which they play – corporate IT networks – is both highly complex and highly individual.
Second, to address this uncertainty, insurance companies have deviated from traditional catastrophe modeling approaches and gravitated toward scenario-based modeling. Rather than looking at cyber risk through a probabilistic framework to establish the totality of the risk, the industry tends to consider highly specific scenarios, such as an outage at a particular cloud provider or a very particular type of ransomware attack.
But cyber risk has too many “free surfaces” and variables for a deterministic or scenario-based modeling approach to be sufficient. Insurers are relying on scenario-based modeling to achieve too many things, using different frameworks with different granularities of data and results – often at cross purposes. As a result, the cyber modeling industry has become somewhat confused.
Quantifying Cyber Risk: Today’s Shortcomings
It’s worth going back to basics: the fundamental disciplines required for managing cyber risk in the insurance industry are risk selection and risk modeling. We will examine each and how it is currently applied.
As the name implies, risk selection is the art of trying to select (or at least recognize) more attractive risks. In today’s cyber insurance market, this typically involves clients completing long questionnaires so underwriters can get a better handle on the potential risk, together with assessing a client’s IT network using external data sources, such as outside-in scans.
This latter approach has recently become a very important part of the underwriting process. An outside-in scan can easily highlight issues such as open ports on a network, which, due to the recent ransomware “pandemic,” are typically a no-go area for underwriters.
As with a standard actuarial modeling approach, acquiring data about a client’s network only gives an objective rearview-mirror perspective. It describes how a client manages today’s risk but doesn’t predict how they will manage future risks, or even what future risks they will be susceptible to.
This outside-in scan data, combined with a strong desire to make complex problems simple, can lead us to believe that models can be produced to aid risk pricing as well as portfolio measurement (i.e., portfolio diversification, tail metrics, capital allocation, and reinsurance transactions) – all based on specific, deterministic, and explicit scenarios. This ignores the fact that cyber risk is always evolving, for example in the volume of attacks, the targets, or the types of attacks.
The other fundamental discipline required for managing cyber risk is cyber risk modeling. The principles are similar to those for natural catastrophe risk: insurers want to apply the best science and robust methodologies to quantify risk. Typically, this has two elements: developing technical pricing approaches and understanding portfolio/catastrophe risk. The latter is how tail risk is measured in order to achieve portfolio diversification, align capital to risk, and transact reinsurance.
Diversification can be achieved using industry, company revenue, and country parameters. Different company sizes and industries use different types of software. Clients in different countries use different software providers, and the size of a company tends to correlate with its IT maturity. However, we must be careful: in the far tail, attacks will leverage vulnerabilities in operating systems or cross-operating-system platforms – so the diversification benefit will be small.
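To make the idea concrete, here is a minimal sketch (not the RMS methodology, and using invented exposure figures) of how a portfolio’s concentration across industry, revenue-band, and country buckets might be summarized with a simple Herfindahl-style index:

```python
from collections import defaultdict

# Hypothetical portfolio: (industry, revenue band, country) -> insured exposure in US$M
exposures = {
    ("healthcare",    "10M-50M",  "US"): 120.0,
    ("retail",        "1M-10M",   "US"): 80.0,
    ("finance",       "50M-250M", "UK"): 150.0,
    ("manufacturing", "10M-50M",  "DE"): 60.0,
}

def herfindahl(buckets) -> float:
    """Sum of squared exposure shares: 1.0 means fully concentrated, lower means more diversified."""
    total = sum(buckets.values())
    return sum((value / total) ** 2 for value in buckets.values())

print(f"Concentration across all buckets: {herfindahl(exposures):.3f}")

# Roll up by industry alone to see how concentration looks along a single dimension
by_industry = defaultdict(float)
for (industry, _revenue, _country), value in exposures.items():
    by_industry[industry] += value
print(f"Concentration by industry only: {herfindahl(by_industry):.3f}")
```

A measure like this captures diversification across the chosen parameters, but, as noted above, it says nothing about the far-tail correlation introduced by shared operating systems and platforms.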
Applying Catastrophe Modeling Principles to Cyber Risk
Cyber insurers want to use science to assess cyber risk, but how can we apply tried-and-tested catastrophe modeling principles to it? To use an imperfect analogy, in nat cat modeling a Category 4 hurricane over the Atlantic will probably make landfall (eventually), but portfolio losses will not vary greatly if its precise landfall differs by a hundred yards.
Even a landfall location that is off by more than this will still have limited consequences for a portfolio. Therefore, from a scenario perspective, a cat model does not need an event set in which landfall locations vary only slightly; models work, and provide value, through the law of large numbers.
Applying the same approach to cyber risk, our analogies need to be drawn at a sensible granularity – and this is where many have failed to understand the problem. If a cyber model were to take a deterministically driven scenario approach, the number of scenarios needed to reasonably parameterize the catastrophe risk would be in the billions – if not higher. Each piece of software creates risk, but this is compounded by the interactions between software. If there are thousands of pieces of software, there will be billions of potential interactions.
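A rough back-of-the-envelope calculation, using assumed software counts purely for illustration, shows how quickly the interaction space explodes once higher-order combinations are considered:

```python
from math import comb

# Illustrative counts only: the real software universe is larger, and constantly changing
for n_software in (1_000, 2_000, 10_000):
    pairs = comb(n_software, 2)    # two-way software interactions
    triples = comb(n_software, 3)  # three-way interactions
    print(f"{n_software:>6,} products -> {pairs:,} pairs, {triples:,} triples")
```

Even with only a few thousand products, counting three-way combinations already pushes the space past a billion, so explicitly enumerating a deterministic scenario for each is intractable.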
The vulnerability is a product of the software, and once a vulnerability is disclosed, companies look to mitigate and/or patch it. However, the response is nonuniform and time dependent. If a million companies are exposed to a vulnerability on day zero, that number will rapidly decrease over the coming weeks and months depending on the seriousness of the vulnerability, secondary implications of the mitigation, and the individual company’s approach to risk.
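As a toy illustration of that time dependence (not an RMS model, and with an assumed, constant patch rate), the population of still-exposed companies can be sketched as an exponential decay:

```python
import math

def exposed_companies(n_day_zero: int, half_life_days: float, day: int) -> int:
    """Companies still exposed on a given day, assuming a constant patch rate (exponential decay)."""
    decay_rate = math.log(2) / half_life_days
    return round(n_day_zero * math.exp(-decay_rate * day))

# Hypothetical: 1,000,000 companies exposed on day zero; patching halves exposure every 30 days
for day in (0, 30, 90, 180):
    print(f"Day {day:>3}: ~{exposed_companies(1_000_000, 30, day):,} companies still exposed")
```

In reality the decay rate itself varies with the severity of the vulnerability, the side effects of the mitigation, and each company’s approach to risk.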
Analyzing any single vulnerability is not a problem. However, models work best following the law of large numbers, and the insurance industry works best when we think about entire portfolios of risk – and when highly specific location-level uncertainty is “smoothed over.”
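A quick simulation, with purely illustrative parameters, shows why portfolio-level results stabilize even when each individual risk is highly uncertain:

```python
import random

random.seed(42)

def average_loss_per_risk(n_risks: int, attack_prob: float = 0.02, mean_loss: float = 1.0) -> float:
    """Average loss per risk across a portfolio of independent, noisy individual risks."""
    total = 0.0
    for _ in range(n_risks):
        if random.random() < attack_prob:                 # does this risk suffer a loss this year?
            total += random.expovariate(1.0 / mean_loss)  # severity is itself random
    return total / n_risks

# Individual outcomes are wildly uncertain, but the per-risk average converges (toward ~0.02 here)
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} risks -> average loss per risk: {average_loss_per_risk(n):.4f}")
```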
Introducing RMS Cyber Solutions 6.0
Models are what we use to solve these problems, but they are only useful if they are actually designed to do so. At RMS®, we have met similar challenges before – and this is what motivates us.
In our upcoming RMS Cyber Solutions Version 6.0 release, we are revamping our approach to modeling cyber risk to help deliver:
Better portfolio diversification: Our industry, revenue, and country parameterizations utilize more specific market-share data, i.e., data establishing which buckets of companies use which types of software and the market size of those software types. We’ve added two new revenue bands, including US$1 million of revenue, and built two micro profiles. Together, these new parameters help modelers build portfolios of risk that are less likely to have correlated cyber risk.
Improved risk differentiation: Our event set has increased roughly 100-fold, allowing scenarios to be parameterized to more specific but still meaningful levels of risk differentiation. The model then runs a Monte Carlo simulation, using these market-share insights to produce robust results from a tractably sized event set (a simplified sketch of this general approach follows this list). This means we can represent the entirety of the risk space without drowning in spuriously parameterized event sets.
Clearer understanding of risk accumulations: A bigger and more diverse event set allows users to better understand and apply their own views of risk to policies. Events can be resolved to a finer granularity and hence described better, which lets users communicate the risk to the portfolio more clearly and avoid misleading or meaningless precision.
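As a highly simplified sketch of this general approach (not the RMS implementation, and with an invented three-event set), a Monte Carlo simulation over a discrete event set can be turned into portfolio metrics such as return-period losses:

```python
import random

random.seed(7)

# Hypothetical event set: (annual frequency, mean portfolio loss in US$M) per event class
event_set = [
    (0.10, 50.0),     # e.g., a widespread ransomware campaign
    (0.05, 200.0),    # e.g., a major cloud-provider outage
    (0.01, 1_000.0),  # e.g., a systemic operating-system vulnerability
]

def simulate_year() -> float:
    """One simulated year: Bernoulli occurrence per event class, exponential severity."""
    total = 0.0
    for freq, mean_loss in event_set:
        if random.random() < freq:                        # does this event class occur this year?
            total += random.expovariate(1.0 / mean_loss)  # draw a loss around its mean
    return total

# Simulate many years, then read off losses at chosen return periods
annual_losses = sorted((simulate_year() for _ in range(100_000)), reverse=True)
for return_period in (100, 250):
    loss = annual_losses[len(annual_losses) // return_period]
    print(f"1-in-{return_period} annual loss: ~US${loss:.0f}M")
```

Because each event class carries its own frequency and severity assumptions, users can overlay their own views of risk by adjusting those parameters rather than relying on a single deterministic scenario.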
At RMS, we are wary of precise scenarios, spurious data, and highly specific loss attribution. All our cyber models are parameterized at the industry, revenue, and country level and use market-share-level data, as we look to produce meaningful and robust risk models rather than models tuned to unachievable and meaningless levels of precision.
There is so much to explore – learn more about our cyber modeling capabilities and RMS Cyber Solutions Version 6.0.