NIGEL ALLEN
May 05, 2020
Breaking Down the Pandemic

As COVID-19 has spread across the world and billions of people are on lockdown, EXPOSURE looks at how the latest scientific data can help insurers better model pandemic risk

The coronavirus disease 2019 (COVID-19) was declared a pandemic by the World Health Organization (WHO) on March 11, 2020. In a matter of months, it expanded from the first reported cases in the city of Wuhan in Hubei province, China, to confirmed cases in over 200 countries around the globe. At the time of writing, approximately one-third of the world's population is under some form of lockdown, with movement and activities restricted in an effort to slow the disease's spread. The reach of COVID-19 is truly global, with even extreme remoteness proving no barrier to its relentless progression as it arrives in far-flung locations such as Papua New Guinea and Timor-Leste.

After declaring the event a global pandemic, Dr. Tedros Adhanom Ghebreyesus, WHO director general, said: "We have never before seen a pandemic sparked by a coronavirus. This is the first pandemic caused by a coronavirus. And we have never before seen a pandemic that can be controlled. … This is not just a public health crisis, it is a crisis that will touch every sector — so every sector and every individual must be involved in the fight."

Ignoring the Near Misses

COVID-19 has been described as the biggest global catastrophe since World War II. Its impact on every part of our lives, from the mundane to the complex, will be profound, and its ramifications will be far-reaching and enduring.

On multiple levels, the coronavirus has caught the world off guard. So rapidly has it spread that initial response strategies, designed to slow its progress, were quickly reevaluated, and more restrictive measures have been required to stem the tide. Yet some are asking why many nations have been so flat-footed in their response.

To find a comparable pandemic event, it is necessary to look back over 100 years to the 1918 flu pandemic, also referred to as Spanish flu. While this is a considerable time gap, the interim period has witnessed multiple near misses that should have ensured countries remained primed for a potential pandemic.

"For very good reasons, people are categorizing COVID-19 as a game-changer. However, SARS in 2003 should have been a game-changer, MERS in 2012 should have been a game-changer, Ebola in 2014 should have been a game-changer. If you look back over the last decade alone, we have seen multiple near misses."
Dr. Gordon Woo, RMS

However, as Dr. Gordon Woo, catastrophist at RMS, explains, such events have gone largely ignored. "For very good reasons, people are categorizing COVID-19 as a game-changer. However, SARS in 2003 should have been a game-changer, MERS in 2012 should have been a game-changer, Ebola in 2014 should have been a game-changer. If you look back over the last decade alone, we have seen multiple near misses.

"If you examine MERS, this had a mortality rate of approximately 30 percent — much greater than COVID-19 — yet fortunately it was not a highly transmissible virus. However, in South Korea a mutation saw its transmissibility rate surge to four chains of infection, which is why it had such a considerable impact on the country."

While COVID-19 is caused by a novel virus and there is no preexisting immunity within the population, its genetic makeup shares around 80 percent of its genes with the coronavirus that sparked the 2003 SARS outbreak.
In fact, the virus is officially titled "severe acute respiratory syndrome coronavirus 2," or "SARS-CoV-2." However, the WHO refers to it by the name of the disease it causes, COVID-19, as calling it SARS could have "unintended consequences in terms of creating unnecessary fear for some populations, especially in Asia which was worst affected by the SARS outbreak in 2003."

"Unfortunately, people do not respond to near misses," Woo adds. "They only respond to events. And perhaps that is why we are where we are with this pandemic. The current event is well within the bounds of catastrophe modeling, or potentially a lot worse if the fatality ratio was in line with that of the SARS outbreak.

"When it comes to infectious diseases, we must learn from history. So, if we take SARS, rather than describing it as a unique event, we need to consider all the possible variants that could occur to ensure we are better able to forecast the type of event we are experiencing now."

Within Model Parameters

A COVID-19-type event scenario is well within risk model parameters. The RMS® Infectious Diseases Model within the LifeRisks® platform incorporates a range of possible source infections, including coronavirus, and the company has been applying model analytics to forecast the potential development tracks of the current outbreak.

Launched in 2007, the Infectious Diseases Model was developed in response to the H5N1 virus. This pathogen exhibited a mortality rate of approximately 60 percent, triggering alarm bells across the life insurance sector and sparking demand for a means of modeling its potential portfolio impact. The model was designed to produce outputs specific to mortality and morbidity losses resulting from a major outbreak.

In 2006, H5N1 exhibited a mortality rate of approximately 60 percent, triggering alarm bells across the life insurance sector and sparking demand for a means of modeling its potential portfolio impact

The probabilistic model is built on two critical pillars. The first is modeling that accurately reflects both the science of infectious disease and the fundamental principles of epidemiology. The second is a software platform that allows firms to address questions based on their exposure and experience data.

"It uses pathogen characteristics that include transmissibility and virulence to parameterize a compartmental epidemiological model and estimate an unabated mortality and morbidity rate for the outbreak," explains Dr. Brice Jabo, medical epidemiologist at RMS. "The next stage is to apply factors including demographics, vaccines and pharmaceutical and non-pharmaceutical interventions to the estimated rate. And finally, we adjust the results to reflect the specific differences in the overall health of the portfolio or the country to generate an accurate estimate of the potential morbidity and mortality losses."

The model currently spans 59 countries, allowing differences in government strategy, health care systems, vaccine treatment, demographics and population health to be applied to each territory when estimating pandemic morbidity and mortality losses.

Breaking Down the Virus

In the case of COVID-19, transmissibility — the average number of infections that result from an initial case — has been a critical model parameter. The virus has a relatively high level of transmissibility, with data showing that each initial infection gives rise to an average of roughly 1.5 to 3.5 further infections.
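To make the mechanics behind Jabo's description more concrete, the sketch below implements a minimal SIR-type compartmental model in Python. It is purely illustrative and not RMS code: the transmissibility value (an R0 of 2.5, within the 1.5-3.5 range quoted above), the infectious period and the fatality ratio are placeholder assumptions, and a production model would add many more compartments, demographic splits and intervention factors.

```python
# Minimal, illustrative SIR-type compartmental model.
# Not RMS code: R0, infectious period and fatality ratio are placeholder assumptions.

def run_sir(population=1_000_000, r0=2.5, infectious_days=7.0,
            fatality_ratio=0.01, initial_infected=10, days=365):
    beta = r0 / infectious_days      # transmission rate per day
    gamma = 1.0 / infectious_days    # recovery/removal rate per day
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    deaths = 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        removals = gamma * i
        s -= new_infections
        i += new_infections - removals
        r += removals
        deaths += fatality_ratio * removals   # unabated estimate, before interventions
    return {"total_infected": population - s, "deaths": deaths}

print(run_sir())
```

Scaling the resulting infection and death counts by portfolio demographics and by vaccine and intervention modifiers mirrors, in spirit, the adjustment steps Jabo describes.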
While there is general consensus on the transmissibility figure, establishing an estimate for the virus's severity, or virulence, is more challenging, as Jabo explains: "Understanding the virulence of the disease enables you to assess the potential burden placed on the health care system. In the model, we therefore track the proportion of mild, severe, critical and fatal cases to establish whether the system will be able to cope with the outbreak. However, the challenge is that this figure is very dependent on the number of tests that are carried out in the particular country, as well as the eligibility criteria applied to conducting the tests."

An effective way of generating more concrete numbers is to have a closed system, where everyone in a particular environment has a similar chance of contracting the disease and all individuals are tested. In the case of COVID-19, these closed systems have come in the form of cruise ships. In these contained environments, it has been possible to test all parties and track the infection and fatality rates accurately.

Another parameter tracked in the model is non-pharmaceutical intervention — those measures introduced in the absence of a vaccine to slow the progression of the disease and prevent health care systems from being overwhelmed. Suppression strategies are currently the most effective form of defense against COVID-19 and are likely to remain in place in many countries for a number of months as work continues on a vaccine.

"This is an example of a risk that is hugely dependent on government policy for how it develops," says Woo. "In the case of China, we have seen how the stringent policies they introduced have worked to contain the first wave, as well as the actions taken in South Korea. There has been a concerted effort across many parts of Southeast Asia, a region prone to infectious diseases, to carry out extensive testing, chase contacts and implement quarantine procedures, and these have so far proved successful in reducing the spread. The focus is now on other parts of the world, such as Europe and the Americas, as they implement measures to tackle the outbreak."

The Infectious Diseases Model's vaccine and pharmaceutical modifiers reflect improvements in vaccine production capacity, manufacturing techniques and the potential impact of antibacterial resistance. While an effective treatment is, at the time of writing, still in development, the modifiers allow users to conduct "what-if" scenarios. "Model users can apply vaccine-related assumptions that they feel comfortable with," Jabo says. "For example, they can predict potential losses based on a vaccine being available within two months that has an 80 percent effectiveness rate, or an antiviral treatment available in one month with a 60 percent rate."

Data Upgrades

Various pathogens have different mortality and morbidity distributions. In the case of COVID-19, evidence to date suggests that the highest levels of mortality occur in the 60-plus age range, with fatality levels declining significantly below this point. However, recent advances in data relating to immunity levels have greatly increased our understanding of the specific age ranges most vulnerable to a particular virus.

"Recent scientific findings from data arising from two major flu viruses, H5N1 and A/H7N9, have had a significant impact on our understanding of vulnerability," explains Woo.
"The studies have revealed that the primary age range of vulnerability to a flu virus is dependent upon the first flu that you were exposed to as a child.

"There are two major flu groups to which everyone would have had some level of exposure at some stage in their childhood. That exposure would depend on which flu virus was dominant at the time they were born, influencing their level of immunity and which type of virus they are more susceptible to in the future. This is critical information in understanding virus spread, and we have adapted the age profile vulnerability component of our model to reflect this."

Recent model upgrades have also allowed for the application of detailed information on population health, as Jabo explains: "Preexisting conditions can increase the risk of infection and death, as COVID-19 is demonstrating. Our model includes a parameter that accounts for the underlying health of the population at the country, state or portfolio level.

"The information to date shows that people with co-morbidities such as hypertension, diabetes and cardiovascular disease are at a higher risk of death from COVID-19. It is possible, based on this data, to apply the distribution of these co-morbidities to a particular geography or portfolio, adjusting the outputs based on where our data shows high levels of these conditions."

Predictive Analytics

The RMS Infectious Diseases Model is designed to estimate pandemic loss for a 12-month period. However, to enable users to assess the potential impact of the current pandemic in real time, RMS has developed a hybrid version that combines the model's pandemic scenarios with the number of cases reported.

"Using the daily case numbers issued by each country," says Jabo, "we project forward from that data, while simultaneously projecting backward from the RMS scenarios. This hybrid approach allows us to provide a time-dependent estimate for COVID-19. In effect, we are creating a holistic alignment of observed data coupled with RMS data to provide our clients with a way to understand how the evolution of the pandemic is progressing in real time."

Aligning the observed data with the model parameters makes the selection of appropriate model scenarios more plausible. The forward and backward projections not only allow for short-term projections, but also form part of model validation and enable users to derive predictive analytics to support their portfolio analysis.

"Staying up to date with this dynamic event is vital," Jabo concludes, "because the impact of the myriad government policies and measures in place will result in different potential scenarios, and that is exactly what we are seeing happening."
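The hybrid approach Jabo describes can be illustrated with a short sketch. The code below is a simplified stand-in rather than the RMS implementation: it fits an exponential trend to recent reported case counts to project forward, compares the observations against a set of scenario curves, and weights the scenarios by how closely they match. All case counts and scenario values are invented for illustration.

```python
# Illustrative only: align observed case counts with hypothetical scenario curves.
# The scenario curves and case numbers below are made-up placeholders.
import math

observed = [120, 160, 210, 280, 360, 470, 600]          # daily reported cases (hypothetical)
scenarios = {                                            # day-by-day projections per scenario
    "mild":    [150, 190, 240, 300, 370, 450, 540],
    "central": [130, 170, 220, 290, 380, 490, 630],
    "severe":  [140, 200, 290, 420, 600, 860, 1230],
}

def forward_projection(series, days_ahead=7):
    """Project forward with a simple exponential growth fit to the last few days."""
    growth = (series[-1] / series[-4]) ** (1 / 3)        # average daily growth factor
    return [series[-1] * growth ** d for d in range(1, days_ahead + 1)]

def scenario_weights(series, curves):
    """Weight each scenario by inverse squared log-error against observations."""
    weights = {}
    for name, curve in curves.items():
        err = sum((math.log(o) - math.log(c)) ** 2 for o, c in zip(series, curve))
        weights[name] = 1.0 / (err + 1e-9)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print("7-day forward projection:", [round(x) for x in forward_projection(observed)])
print("scenario weights:", {k: round(v, 2) for k, v in scenario_weights(observed, scenarios).items()})
```

Projecting backward from the scenarios, that is, checking whether today's counts sit on the path a given scenario implies, works analogously and, per Jabo, also doubles as a validation step.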

Helen Yates
September 06, 2019
Insurance: The Next 10 Years

Mohsen Rahnama, Cihan Biyikoglu and Moe Khosravy of RMS look to 2029, consider the changes the (re)insurance industry will have undergone and explain why all roads lead to a platform

Over the last 30 years, catastrophe models have become an integral part of the insurance industry for portfolio risk management. During this time, the RMS model suite has evolved and expanded from the initial IRAS model — which covered California earthquake — to a comprehensive and diverse set of models covering over 100 peril-country combinations around the world. RMS recently launched RMS Risk Intelligence™, an open and flexible platform built to enable better risk management and support profitable risk selection.

Since the earliest versions of catastrophe models, significant advances have been made in both technology and computing power. These advances allow for a more comprehensive application of new science in risk modeling and make it possible for modelers to address key sources of model and loss uncertainty in a more systematic way.

These and other significant changes over the last decade are shaping the future of insurance. By 2029, the industry will be fully digitized, presenting even more opportunity for disruption in an era of technological advances. In what is likely to remain a highly competitive environment, market participants will need to differentiate based on the power of computing speed and the ability to mine and extract value from data to inform quick, risk-based decisions.

Laying the Foundations

So how did we get here? Over the past few decades we have witnessed several major natural catastrophes, including Hurricanes Andrew, Katrina and Sandy; the Northridge, Kobe, Maule, Tōhoku and Christchurch earthquakes; and costly hurricanes and California wildfires in 2017 and 2018. Human-made catastrophes have included the terrorist attacks of 9/11 and major cyberattacks such as WannaCry and NotPetya.

Each of these events has changed the landscape of risk assessment, underwriting and portfolio management. Combining the lessons learned from past events, including billions of dollars of loss data, with new technology has enhanced the risk modeling methodology, resulting in more robust models and a more effective way to quantify risk across diverse regions and perils.

The sophistication of catastrophe models has increased as technology has enabled a better understanding of the root causes and behavior of events and improved analysis of their impact. Technology has also equipped the industry with more sophisticated tools to harness larger datasets and run more computationally intensive analytics. These new models are designed to translate finer-grained data into deeper and more detailed insights. Consequently, we are creating better models while also ensuring model users can make better use of model results through more sophisticated tools and applications.

A Collaborative Approach

In the last decade, the pace at which technology has advanced is compelling. Emerging technology has caused the insurance industry to question whether it is responding quickly and effectively enough to take advantage of new opportunities. In today's digital world, many segments of the industry are leveraging the power and capacity enabled by Cloud-computing environments to conduct intensive data analysis using robust analytics.
Technology has also equipped the industry with more sophisticated tools to harness larger datasets

Such an approach empowers the industry by allowing information to be accessed quickly, whenever it is needed, to make effective, fully informed decisions. The development of a standardized, open platform creates smooth workflows and allows for rapid advancement, information sharing and collaboration in growing common applications.

The future of communication between the various parties across the insurance value chain — insurers, brokers, reinsurers, supervisors and capital markets — will be vastly different from what it is today. By 2029, we anticipate the transfer of data, use of analytics and other collaborations will be taking place across a common platform. The benefits will include increased efficiency, more accurate data collection and improvements in underwriting workflow. A collaborative platform will also enable more robust and informed risk assessments, portfolio rollup processes and risk transfers. Further, as data is exchanged it will be enriched and augmented using new machine learning and AI techniques.

An Elastic Platform

We continue to see technology evolve at a very rapid pace. Infrastructure continues to improve as the cost of storage declines and computational speed increases. Across the board, the incremental cost of computing technology has come down.

Software tools have evolved accordingly, with modern big data systems now capable of handling hundreds if not thousands of terabytes of data. Improved programming frameworks allow for more seamless parallel programming. User-interface components reveal data in ways that were not possible in the past. Furthermore, this collection of phenomenal advances is now available in the Cloud, with the added benefit that it is continuously self-improving to support growing commercial demands.

In addition to helping avoid built-in obsolescence, the Cloud offers "elasticity." Elasticity means accessing many machines when you need them and fewer when you don't. It means storage that can dynamically grow and shrink, and computing capacity that can follow the ebb and flow of demand. In our world of insurance and data analytics, the macro cycles of renewal seasons and the micro bursts of modeling demand can both be accommodated through the elastic nature of the Cloud. In an elastic world, the actual cost of supercomputing goes down, and we can confidently guarantee fast response times.

Empowering Underwriters

A decade from now, the industry will look very different, not least due to changes within the workforce and the risk landscape. First-movers and fast-followers will be in a position of competitive advantage come 2029, in an industry where large incumbents are already partnering with more agile "insurtech" startups.

The role of the intermediary will continue to evolve, and at every stage of risk transfer — from insured to primary insurer, reinsurer and into the capital markets — data sharing and standardization will become key success factors. Over the next 10 years, as data becomes more standardized and more widely shared, the concept of blockchain, or distributed ledger technology, will move closer to becoming a reality.

This standardization, collaboration and use of advanced analytics are essential to the future of the industry. Machine learning and AI, highly sophisticated models and enhanced computational power will enable underwriters to improve their risk selection and make quick, highly informed decisions.
And this ability will enhance the role of the insurance industry in society, in a changing and altogether riskier world. The tremendous protection gap can only be tackled when there is more detailed insight and differentiation around each individual risk. When there is greater insight into the underlying risk, there is less need for conservatism, risks become more accurately and competitively priced, and (re)insurers are able to innovate to provide products and solutions for new and emerging exposures.

Over the coming decade, models will require advanced computing technology to fully harness the power of big data. Underwater robots are now probing previously unmapped ocean waters to detect changes in temperatures, currents, sea level and coastal flooding. Drones are surveying our built environment in fine detail. Artificial intelligence and machine learning algorithms are searching for patterns of climate change in these new datasets, and climate models are reconstructing the past and predicting the future at a resolution never before possible. These emerging technologies and datasets will help meet our industry's insatiable demand for more robust risk assessment at the level of an individual asset.

This explosion of data will fundamentally change the way we think about model execution and development, as well as the end-to-end software infrastructure. Platforms will need to be dynamic and forward-looking versus static and historic in the way they acquire, train and execute on data.

The industry has already transformed considerably over the past five years, despite traditionally being considered a laggard in terms of its technology adoption. The foundation is firmly in place for a further shift over the next decade where all roads lead to a common, collaborative industry platform, where participants are willing to share data and insights and, as they do so, open up new markets and opportunities.

RMS Risk Intelligence

The analytical and computational power of the Risk Intelligence (RI) platform enables the RMS model development team to bring the latest science and research to the RMS catastrophe peril model suite and build the next generation of high-definition models. The functionality and high performance of RI allow the RMS team to assess elements of model and loss uncertainty in a more robust way than before.

The framework of RI is flexible, modular and scalable, allowing the rapid integration of future knowledge with a swifter implementation and update cycle. The open modeling platform allows model users to extract more value from their claims experience to develop vulnerability functions that represent a view of risk specific to their data, or to use custom-built alternatives. This enables users to perform a wide range of sensitivity tests and take ownership of their view of risk.

Mohsen Rahnama is chief risk modeling officer and executive vice president, models and data; Cihan Biyikoglu is executive vice president, product; and Moe Khosravy is executive vice president, software and platform at RMS.

NIGEL ALLEN
September 06, 2019
A Need for Multi-Gap Analysis

The insurance protection gap is composed of emerging market, high-risk and intangible exposures

There cannot be many industries that recognize that approximately 70 percent of their market potential is untapped. Yet that is the scale of opportunity in the expanding "protection gap."

Power outage in lower Manhattan, New York, after Hurricane Sandy

While efforts are ongoing to plug the colossal shortage, any meaningful industry foray into this barren range must acknowledge that the gap is actually multiple gaps, believes Robert Muir-Wood, chief research officer at RMS. "It is composed of three distinct insurance gaps — high risk, emerging markets and intangibles — each with separate causes and distinct solutions. Treating it as one single challenge means we will never achieve the loss clarity to tackle the multiple underlying issues."

High-risk, high-value gaps exist in regions where the potential loss magnitude outweighs the ability of the industry to refund post-catastrophe. High deductibles and exclusions reduce coverage appeal and stunt market growth. "Take California earthquake. The California Earthquake Authority (CEA) was launched in 1996 to tackle the coverage dilemma exposed by the Northridge disaster. Yet increased deductibles and new exclusions led to a 30 percent gap expansion. And while recent changes have seen an uptick in purchases, penetration is around 12-14 percent for California homeowners."

On the emerging market front, micro- and meso-insurance and sovereign risk transfer efforts to bridge the gap have achieved limited success. "The shortfall in emerging economies remains static at between 80 and 100 percent," he states, "and it is not just a developing world issue; it's clearly evident in mature markets like Italy."

"The protection gap is composed of three distinct insurance gaps — high risk, emerging markets and intangibles — each with separate causes and distinct solutions"
Robert Muir-Wood, RMS

A further fast-expanding gap is intangible assets. "In 1975, physical assets accounted for 83 percent of the value of S&P 500 companies," Muir-Wood points out. "By 2015, that figure was 16 percent, with 84 percent composed of intangible assets such as IP, client data, brand value and innovation potential."

While non-damage business interruption cover is evolving, expanding client demand for cover against events such as power outage, cloud disruption and cyber breach greatly outpaces delivery.

To start closing these gaps, Muir-Wood believes protection gap analytics are essential. "We have to first establish a consistent measurement for the difference between insured and total loss and split out 'penetration' and 'coverage' gaps. That gives us our baseline from which to set appropriate targets and monitor progress.

"Probabilistic cat risk models will play a central role, particularly for the high-risk protection gap, where multiple region- and peril-specific models already exist. However, for intangibles and emerging markets, where such models have yet to gain a strong foothold, focusing on scenario events might prove a more effective approach."

Variations in the gaps according to the severity and geography of the catastrophe could be expressed in the form of an exceedance probability curve, showing how the percentage of uninsured risk varies by return period. "There should be standardization in measuring and reporting the gap," he concludes. "This should include analyzing insured and economic loss based on probabilistic models, separating the effects of the penetration and coverage gaps, and identifying how gaps vary with annual probability and location."
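The exceedance probability view Muir-Wood describes can be sketched in a few lines. The example below is illustrative only: the economic and insured loss figures per return period are invented numbers, and the calculation simply expresses the uninsured share of loss at each return period, which is the kind of curve a standardized protection gap metric would report.

```python
# Illustrative protection gap by return period; loss figures are invented.

# (return period in years, economic loss, insured loss) in US$ billions
ep_points = [
    (10,    5.0,  2.5),
    (50,   40.0, 14.0),
    (100,  90.0, 27.0),
    (250, 200.0, 50.0),
]

for return_period, economic, insured in ep_points:
    uninsured_share = 1.0 - insured / economic
    print(f"1-in-{return_period:>3} year loss: "
          f"{uninsured_share:.0%} of US${economic:.0f}B economic loss uninsured")
```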

Helen Yates
September 06, 2019
Severe Convective Storms: A New Peak Peril?

Severe convective storms (SCS) have driven U.S. insured catastrophe losses in recent years, with both attritional and major single-event claims now rivaling an average hurricane season. EXPOSURE looks at why SCS losses are rising and asks how (re)insurers should be responding

At the time of writing, 2019 was already shaping up to be another active season for U.S. severe convective storms (SCS), with at least eight tornadoes daily over a period of 12 consecutive days in May. It was the most active May for tornadoes since 2015, with no fewer than seven SCS outbreaks across central and eastern parts of the U.S. According to data from the National Oceanic and Atmospheric Administration (NOAA), there were 555 preliminary tornado reports, more than double the 1991-2010 average of 276 for the month. On current numbers, May 2019 produced the second-highest number of reported tornadoes for any month on record, after April 2011, which broke multiple records for SCS and tornado touchdowns.

It continues a trend set over the past two decades, which has seen SCS losses increase significantly and steadily. In 2018, losses amounted to US$18.8 billion, of which US$14.1 billion was insured. This compares with insured losses of US$15.6 billion from hurricanes in the same period.

While SCS losses are often an accumulation of claims from multiple events, there are examples of single events costing insurers and reinsurers over US$3 billion in claims. This includes the costliest SCS to date, which hit Tuscaloosa, Alabama, in April 2011, involving several tornado touchdowns and causing US$7.9 billion in insured damage. The second-most-costly SCS occurred in May of the same year, striking Joplin, Missouri, and other locations, resulting in insured losses of nearly US$7.6 billion.

"The trend in the scientific discussion is that there might be fewer but more-severe events"
Juergen Grieser, RMS

According to RMS models, average losses from SCS now exceed US$15 billion annually and are in the same range as the hurricane average annual loss (AAL), a finding backed up by independently published scientific research. "The losses in 2011 and 2012 were real eye-openers," says Rajkiran Vojjala, vice president of modeling at RMS. "SCS is no longer a peril with events that cost a few hundred million dollars. You could have cat losses of US$10 billion in today's money if there were events similar to those in April 2011."

Nearly a third of all tornadoes reported in an average year occur in Texas, Oklahoma, Kansas and Nebraska, all states within "Tornado Alley." This is where cold, dry polar air meets warm, moist air moving up from the Gulf of Mexico, causing strong convective activity. "A typical SCS swath affects many states. So the extent is large, unlike, say, wildfire, which is truly localized to a small particular region," says Vojjala.

Research suggests the annual number of EF2 and stronger tornadoes on the Enhanced Fujita (EF) scale hitting the U.S. has trended upward over the past 20 years; however, there is some doubt over whether this is a real meteorological trend. One explanation could be that improved observational practices simply mean such weather phenomena are more likely to be recorded, particularly in less populated regions.

According to Juergen Grieser, senior director of modeling at RMS, there is a debate over whether part of the increase in claims relating to SCS could be attributed to climate change.
"A warmer climate means a weaker jet stream, which should lead to less organized convection while the energy of convection might increase," he says. "The trend in the scientific discussion is that there might be fewer but more-severe events."

Claims severity, rather than claims frequency, is the more significant driver of losses relating to hail events, he adds. "We have an increase in hail losses of about 11 percent per year over the last 15 years, which is quite a lot. But 7.5 percent of that is from an increase in the cost of individual claims," explains Grieser. "So, while the claims frequency has also increased in this period, the individual claim is more expensive now than it was ever before."

Claims Go 'Through the Roof'

Another big driver of loss is likely to be aging roofs and the increasing exposure at risk from SCS. The contribution of roof age was explored in a blog last year by Stephen Cusack, director of model development at RMS. He noted that one of the biggest changes in residential exposure to SCS over the past two decades has been the rise in the median age of housing, from 30 years in 2001 to 37 years in 2013.

A changing insurance industry climate is also driving increased losses, thinks Vojjala. "There has been a change in public perception on claiming, whereby even cosmetic damage to roofs is now being claimed, and contractors are chasing hailstorms to see what damage might have been caused," he says. "So, there is more awareness and that has led to higher losses.

"The insurance products for hail and tornado have grown and so those perils are being insured more, and there are different types of coverage," he notes. "Most insurers now offer not replacement cost but only the actual value of the roofs to alleviate some of the rising cost of claims. On the flip side, if they do continue offering full replacement coverage and a hurricane hits in some of those areas, you now have better roofs."

How insurance companies approach the peril is changing as a result of rising claims. "Historically, insurance and reinsurance clients have viewed SCS as an attritional loss, but in the last five to 10 years the changing trends have altered that perception," says Vojjala. "That's where there is this need for high-resolution modeling, which increasingly our clients have been asking for to improve their exposure management practices.

"With SCS also having catastrophic losses, it has stoked interest from the ILS community as well, who are also experimenting with parametric triggers for SCS," he adds. "We usually see this on the earthquake or hurricane side, but increasingly we are seeing it with SCS as well."
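Grieser's hail figures imply a simple decomposition of loss growth into severity and frequency. As a rough illustration (treating annual loss as claim frequency times average claim severity, and reading the percentages quoted above as compound annual growth rates), the implied frequency growth works out to roughly 3 percent per year:

```python
# Rough decomposition of hail loss growth, using the growth rates quoted above.
# Assumes annual loss ~= claim frequency x average claim severity.

total_growth = 0.11       # ~11% per year increase in hail losses
severity_growth = 0.075   # ~7.5% per year increase in cost per claim

implied_frequency_growth = (1 + total_growth) / (1 + severity_growth) - 1
print(f"implied claim frequency growth: {implied_frequency_growth:.1%} per year")
# -> roughly 3.3% per year
```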

Helen Yates
September 06, 2019
Risk in 2030

At this year’s RMS Exceedance conference in Miami, Robert Muir-Wood and Michael Steel imagined 10 future risks

Helen Yates
September 06, 2019
Ridgecrest: A Wake-Up Call

Marleen Nyst and Nilesh Shome of RMS explore some of the lessons and implications from the recent sequence of earthquakes in California

On the morning of July 4, 2019, the small town of Ridgecrest in California's Mojave Desert unexpectedly found itself at the center of a major news story after a magnitude 6.4 earthquake struck close by. This earthquake turned out to be a foreshock of a magnitude 7.1 earthquake the following day, the strongest earthquake to hit the state in 20 years. These events, part of a series of earthquakes and aftershocks felt by millions of people across the state, briefly reignited awareness of the threat posed by earthquakes in California.

Fortunately, damage from the Ridgecrest earthquake sequence was relatively limited. With the event not causing a widespread social or economic impact, its passage through the news agenda was relatively swift. But there are several reasons why an event such as the Ridgecrest earthquake sequence should be a focus of attention both for the insurance industry and for the residents and local authorities of California.

"If Ridgecrest had happened in a more densely populated area, this state would be facing a far different economic future than it is today"
Glenn Pomeroy, California Earthquake Authority

"We don't want to minimize the experiences of those whose homes or property were damaged or who were injured when these two powerful earthquakes struck, because for them these earthquakes will have a lasting impact, and they face some difficult days ahead," explains Glenn Pomeroy, chief executive of the California Earthquake Authority. "However, if this series of earthquakes had happened in a more densely populated area or an area with thousands of very old, vulnerable homes, such as Los Angeles or the San Francisco Bay Area, this state would be facing a far different economic future than it is today — potentially a massive financial crisis."

Although California is one of the most populous U.S. states, its population is mostly concentrated in metropolitan areas. A major earthquake in one of these areas could have repercussions for both the domestic and the international economy.

Low Probability, High Impact

Earthquake is a low probability, high impact peril. In California, earthquake risk awareness is low, both among the general public and among many (re)insurers. The peril has not caused a major insured loss for 25 years, the last being the magnitude 6.7 Northridge earthquake in 1994.

California earthquake has the potential to cause large-scale insured and economic damage. A repeat of the Northridge event would likely cost the insurance industry around US$30 billion today, according to the latest version of the RMS® North America Earthquake Models, and Northridge is far from a worst-case scenario.

From an insurance perspective, one of the most significant earthquake events on record was the magnitude 9.0 Tōhoku Earthquake and Tsunami in 2011. For California, the 1906 magnitude 7.8 San Francisco earthquake, when Lloyd's underwriter Cuthbert Heath famously instructed his San Franciscan agent to "pay all of our policyholders in full, irrespective of the terms of their policies," remains historically significant. Heath's actions led to a Lloyd's payout of around US$50 million at the time and helped cement Lloyd's reputation in the U.S. market. RMS models suggest a repeat of this event today could cost the insurance industry around US$50 billion.
But the economic cost of such an event could be around six times the insurance bill — as much as US$300 billion — even before considering damage to infrastructure and government buildings, due to the surprisingly low penetration of earthquake insurance in the state.

Events such as the 1906 earthquake and even Northridge are too far in the past to remain in the public consciousness, and the lack of awareness of the peril's damage potential is demonstrated by the low take-up of earthquake insurance in the state. "Because large, damaging earthquakes don't happen very frequently, and we never know when they will happen, for many people it's out of sight, out of mind. They simply think it won't happen to them," Pomeroy says.

Across California, an average of just 12 percent to 14 percent of homeowners have earthquake insurance. Take-up varies across the state, with some high-risk regions, such as the San Francisco Bay Area, experiencing take-up below the state average. Take-up tends to be slightly higher in Southern California and is around 20 percent in Los Angeles and Orange counties.

Take-up will typically increase in the aftermath of an event as public awareness rises, but it will rapidly fall as the risk fades from memory. As with any low probability, high impact event, there is a danger the public will not be well prepared when a major event strikes. The insurance industry can take steps to address this challenge, particularly by working to increase awareness of earthquake risk and actively promoting the importance of having insurance coverage for faster recovery. RMS and its insurance partners have also been working to improve society's resilience against risks such as earthquake, through initiatives such as the 100 Resilient Cities program.

Understanding the Risk

While the tools to model and understand earthquake risk are improving all the time, several unknowns remain of which underwriters should be aware. One of the reasons the Ridgecrest earthquake came as such a surprise was that the fault on which it occurred was not one that seismologists knew existed. Several other recent earthquakes — such as the 2014 Napa event, the Landers and Big Bear earthquakes in 1992, and the Loma Prieta earthquake in 1989 — took place on faults or fault strands that were previously unknown or thought to be inactive.

As well as not having a full picture of where the faults may lie, scientific understanding of how multiple faults can link together to form a larger event is also changing. Events such as the Kaikoura Earthquake in New Zealand in 2016 and the Baja California Earthquake in Mexico in 2010 have helped inform new scientific thinking that faults can link together, causing more damaging, larger magnitude earthquakes. The RMS North America Earthquake Models have evolved to factor in this thinking and capture multifault ruptures based on the latest research results. In addition, studying the interaction between the faults that ruptured in the Ridgecrest events will allow RMS to improve the fault connectivity in the models.

A further lesson from New Zealand came via the 2011 Christchurch Earthquake, which demonstrated how liquefaction of soil can be a significant loss driver in areas with certain soil conditions. The San Francisco Bay Area, an important national and international economic hub, could suffer a similar impact in the event of a major earthquake.
Across the Bay Area, there has been significant residential and commercial development over the last 100 years on artificial landfill areas that are prone to significant liquefaction damage, similar to what was observed in Christchurch.

Location, Location, Location

Clearly, the location of an earthquake is critical to the scale of damage and the insured and economic impact of an event. Ridgecrest is situated roughly 200 kilometers north of Los Angeles. Had the recent earthquake sequence occurred beneath Los Angeles instead, it is plausible that the insured cost could have been in excess of US$100 billion.

The Puente Hills Fault, which sits underneath downtown Los Angeles, was not discovered until around the turn of the century. According to RMS modeling, a magnitude 6.8 Puente Hills event could cause an insured loss of US$78.6 billion, and a magnitude 7.3 Newport-Inglewood event an estimated US$77.1 billion. These are just two examples from the stochastic event set with magnitudes similar to the Ridgecrest events that could have a significant social, economic and insured loss impact if they took place elsewhere in the state.

The RMS model estimates that magnitude 7 earthquakes in California could cause insurance industry losses ranging from US$20,000 to US$20 billion, but the maximum loss could be over US$100 billion if one occurred in a high population center such as Los Angeles. The losses from the Ridgecrest events were at the low end of this range because they occurred in a less populated area. For the California Earthquake Authority's portfolio in Los Angeles County, a loss event of US$10 billion or greater can be expected approximately every 30 years.

As with any major catastrophe, several factors can drive up the insured loss bill, including post-event loss amplification and contingent business interruption, given the potential scale of disruption. In Sacramento, there is also a risk of failure of the levee system. Fire following earthquake was a significant cause of damage after the 1906 San Francisco earthquake, estimated to account for around 40 percent of the overall loss from that event. It is, however, expected that fire would make a much smaller contribution to future events, given modern construction materials and methods and fire suppression systems. Political pressure to settle claims could also drive up the loss total from an event. Lawmakers could put pressure on the CEA and other insurers to settle claims quickly, as has been the case in the aftermath of other catastrophes, such as Hurricane Sandy.

The California Earthquake Authority has recommended that homes built prior to 1980 be seismically retrofitted to make them less vulnerable to earthquake damage. "We all need to learn the lesson of Ridgecrest: California needs to be better prepared for the next big earthquake because it's sure to come," Pomeroy says.

"We recommend people consider earthquake insurance to protect themselves financially," he continues. "The government's not going to come in and rebuild everybody's home, and a regular residential insurance policy does not cover earthquake damage. The only way to be covered for earthquake damage is to have an additional earthquake insurance policy in place.

"Close to 90 percent of the state does not have an earthquake insurance policy in place. Let this be the wake-up call that we all need to get prepared."
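To put the "approximately every 30 years" figure into probability terms, a quick calculation (illustrative, using only the return period quoted above) converts it into an annual exceedance probability and the chance of seeing at least one such loss over a 30-year horizon:

```python
# Convert a return period into annual and multi-year exceedance probabilities.
return_period_years = 30           # US$10 billion-plus CEA loss, per the article
horizon_years = 30                 # e.g., the length of a typical mortgage

annual_probability = 1 / return_period_years
prob_over_horizon = 1 - (1 - annual_probability) ** horizon_years

print(f"annual exceedance probability: {annual_probability:.1%}")   # ~3.3%
print(f"chance of at least one such loss in {horizon_years} years: "
      f"{prob_over_horizon:.0%}")                                   # ~64%
```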

Helen Yates
September 06, 2019
Like Moths to the Flame

Why is it that, in many different situations and perils, people appear to want to relocate toward the risk? And what is the role of the private insurance and reinsurance industry in curbing its clients' risk tropism?

If the Great Miami Hurricane of 1926 were to occur again today, it would result in insurance losses approaching US$200 billion. Even adjusted for inflation, that is hundreds of times more than the US$100 million damage toll in 1926. Over the past 100 years, the Florida coast has developed exponentially, with wealthy individuals drawn to buying lavish coastal properties — and the accompanying wind and storm-surge risks. Since 2000, the number of people living in coastal areas of Florida has increased by 4.2 million, or 27 percent, to 19.8 million in 2015, according to the U.S. Census Bureau.

This is an example of unintended "risk tropism," explains Robert Muir-Wood, chief research officer at RMS. Just as the sunflower is a "heliotrope," turning toward the sun, research has shown that humans have an innate drive to live near water, on a river or at the beach, often at increased risk of flood hazards.

"There is a very strong human desire to find the perfect primal location for your house. It is something that is built deeply into the human psyche," Muir-Wood explains. "People want to live with the sound of the sea, or in the forest 'close to nature,' and they are drawn to these locations thinking about all the positives and amenity values, but not really understanding or evaluating the accompanying risk factors.

"People will pay a lot to live right next to the ocean," he adds. "It's an incredibly powerful force and they will invest in doing that, so the price of land goes up by a factor of two or three times when you get close to the beach."

Even when beachfront properties are wiped out in hurricane catastrophes, far from driving individuals away from a high-risk zone, research shows they simply "build back bigger," says Muir-Wood. "The disaster can provide the opportunity to start again, and wealthier people move in and take the opportunity to rebuild grander houses. At least the new houses are more likely to be built to code, so maybe the reduction in vulnerability partly offsets the increased exposure at risk."

Risk tropism can also be found in the encroachment of high-value properties into the wildlands of California, which has led to a big increase in wildfire insurance losses. Living close to trees can be good for mental health until those same trees bring a conflagration. Insurance losses due to wildfire exceeded US$10 billion in 2017 and have already breached US$12 billion for last year's Camp, Hill and Woolsey Fires, according to the California Department of Insurance. It is not the number of fires that has increased, but the number of houses consumed by the fires.

"Insurance tends to stop working when you have levels of risk above one percent […] People are unprepared to pay for it"
Robert Muir-Wood, RMS

Muir-Wood notes that the footprint of the 2017 Tubbs Fire, with claims reaching nearly US$10 billion, was very similar to the area burned during the Hanley Fire of 1964. The principal difference in outcome is driven by how much housing has been developed in the path of the fire.
"If a fire like that arrives twice in one hundred years to destroy your house, then the amount you are going to have to pay in insurance premium is going to be more than 2 percent of the value per year," he says. "People will think that's unjustified and will resist it, but actually insurance tends to stop working when you have levels of risk cost above 1 percent of the property value, meaning, quite simply, that people are unprepared to pay for it."

Risk tropism can also be found in the business sector, in the way that technology companies have clustered in Silicon Valley: a tectonic rift within a fast-moving plate boundary. The tectonics have created the San Francisco Bay and modulate the climate to bring natural air-conditioning.

"Why is it that, around the world, the technology sector has picked locations — including Silicon Valley, Seattle, Japan and Taiwan — that are on plate boundaries and are earthquake prone?" asks Muir-Wood. "There seems to be some ideal mix of mountains and water. The Bay Area is a very attractive environment, which has brought the best students to the universities and has helped companies attract some of the smartest people to come and live and work in Silicon Valley," he continues. "But one day there will be a magnitude 7+ earthquake in the Bay Area that will bring incredible disruption, and that will affect the technology firms themselves."

Insurance and reinsurance companies have an important role to play in informing and dissuading organizations and high net worth individuals from being drawn toward highly exposed locations; they can help by pricing the risk correctly and maintaining underwriting discipline. The difficulty comes when politics and insurance collide.

The growth of Fair Access to Insurance Requirements (FAIR) plans and beach plans, which offer more affordable insurance in parts of the U.S. that are highly exposed to wind and quake perils, is one example of how this function is undermined. At its peak, the size of the residual market in hurricane-exposed states was US$885 billion, according to the Insurance Information Institute (III). It has steadily been reduced, partly as a result of the influx of non-traditional capacity from the ILS market and competitive pricing in the general reinsurance market.

However, in many cases these markets-of-last-resort remain some of the largest property insurers in coastal states. Between 2005 and 2009 (following Hurricanes Charley, Frances, Ivan and Jeanne in 2004), the plans in Mississippi, Texas and Florida showed rapid percentage growth in terms of exposure and number of policyholders. A factor fueling this growth, according to the III, was the rise in coastal properties.

As long as state-backed insurers are willing to subsidize the cost of cover for those choosing to locate in the riskiest locations, private (re)insurance will fail as an effective check on risk tropism, thinks Muir-Wood. "In California there are quite a few properties that have not been able to get standard fire insurance," he observes. "But there are state or government-backed schemes available, and they are being used by people whose wildfire risk is considered to be too high."
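Muir-Wood's premium arithmetic is easy to reproduce. As a simple illustration (assuming a total-loss fire twice per hundred years and ignoring expenses, loadings and partial losses), the pure risk cost alone already exceeds the roughly 1 percent of property value at which he suggests insurance demand breaks down:

```python
# Reproduce the back-of-envelope risk cost in Muir-Wood's example.
# Simplifying assumptions: each fire is a total loss; no expenses or loading.

fires_per_century = 2
annual_event_probability = fires_per_century / 100       # 2% per year
property_value = 1_000_000                                # illustrative US$ value

pure_risk_cost = annual_event_probability * property_value
print(f"pure annual risk cost: US${pure_risk_cost:,.0f} "
      f"({annual_event_probability:.0%} of property value)")
# Premiums above ~1% of value are, per Muir-Wood, where buyers start to balk.
```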
