New Data, New Challenges: How RMS Updated the Version 17 North America Earthquake Models
Ashley Bernero | June 08, 2017
The technology, data, and science used to assess and understand earthquake risk continue to evolve; the new continually replaces the old. In AD 132, the Chinese polymath Zhang Heng demonstrated his seismoscope, the first scientific instrument used to measure earth movement. Members of the Han Court stared at this urn-shaped, fine copper device, which featured eight evenly spaced dragons around its middle, one for each direction, heads pointing down, ready to record the location of an earthquake. Onlookers must have been in awe: when an earthquake occurred, the dragon facing the direction of the quake would open its mouth and drop a copper ball into a decorative toad waiting below.
Progress has been made since then. At the time, Zhang Heng believed that earthquakes were caused by disturbances in the wind and air. Nearly 1,900 years later, and with many awestruck moments along the way, our understanding of the forces that drive earthquakes has improved enormously, and the science and technology that underpin this understanding are still developing.
Every five years or so, the U.S. Geological Survey (USGS) releases updated seismic hazard maps, known as the National Seismic Hazard Maps (NSHM), which give earthquake ground shaking probabilities across the U.S. for a range of probability levels. These maps incorporate the latest understanding of earthquake sources, recurrence rates, and ground motion models, and represent the most comprehensive view of U.S. seismicity available. They are widely used and often inform public policy and building codes across the country. The latest iteration, released by the USGS in 2014, incorporates instrument recordings from 600 historical events, up from 170 in the previous 2008 maps.
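For context, hazard maps are typically quoted at probability levels such as a 2 percent or 10 percent chance of exceedance in 50 years. Assuming shaking exceedances arrive as a Poisson process, these convert to mean return periods as in the short sketch below (a generic illustration of the standard conversion, not USGS or RMS code):

```python
import math

def return_period(prob_exceedance, horizon_years=50.0):
    """Convert a probability of exceedance within horizon_years to a mean
    return period, assuming a Poisson process:
    P = 1 - exp(-horizon / T)  =>  T = -horizon / ln(1 - P)."""
    return -horizon_years / math.log(1.0 - prob_exceedance)

# The familiar building-code hazard levels:
print(round(return_period(0.02)))  # 2% in 50 years -> ~2,475-year return period
print(round(return_period(0.10)))  # 10% in 50 years -> ~475-year return period
```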
Between 2008 and 2014, RMS development staff actively engaged and worked alongside the USGS to help build the 2014 NSHM. But even with the in-depth insight gained from that collaboration, once the maps were released RMS still invested more than 100 person-years to implement the update in the version 17 RMS North America Earthquake Models (NAEQ).
Running the USGS Data on a Business Clock
The challenge with this implementation was to capture the wealth of information in the USGS update while ensuring the model could still run in a practical time frame for our clients. Although RMS clients have impressive IT infrastructure, they shouldn't have to use a supercomputer to run simulations, as the USGS does. Nor would it be acceptable to wait months for a single simulation run. RMS was therefore tasked with getting the models to run on a business clock.
Why would this USGS update create such a run-time issue? A good place to start is the new source model for California, known as the Uniform California Earthquake Rupture Forecast Version 3, or UCERF3 for short. This model introduces multi-fault ruptures: the USGS logic tree contains more than 500 million possible California ruptures. California alone now has as many events as the entire previous generation of RMS North America Earthquake Models.
Locking in the Goodness
RMS source modelers needed to optimize this California component within the RMS event set, as full inclusion of these events would have made the model too slow to be practicable. Boiling the event set down while still matching the USGS hazard was the statistical equivalent of juggling on a high wire, but after much effort RMS optimized it down to under 15 percent of the total number of unique events, all while preserving the average annual loss (AAL), return-period hazard, total rate, and magnitude distribution of the UCERF3 event set.
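To make the flavor of that optimization concrete, here is a minimal sketch of one generic way to thin a large stochastic event set: stratify by magnitude, sample within each stratum, and re-scale the survivors' annual rates so bin-level rates are conserved. This illustrates the idea only; it is not the actual RMS optimization, and all parameter values are hypothetical.

```python
import numpy as np

def thin_event_set(mags, rates, keep_frac=0.15, n_bins=20, seed=0):
    """Keep roughly keep_frac of the events in each magnitude bin (sampled
    in proportion to annual rate) and re-scale the survivors' rates so each
    bin's total rate -- and hence the coarse magnitude distribution -- is
    preserved exactly. AAL and return-period hazard are preserved only
    approximately here; the real optimization must match those too."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(mags.min(), mags.max(), n_bins + 1)
    bins = np.clip(np.digitize(mags, edges) - 1, 0, n_bins - 1)
    keep_idx, new_rates = [], []
    for b in range(n_bins):
        members = np.flatnonzero(bins == b)
        if members.size == 0:
            continue
        n_keep = max(1, int(round(keep_frac * members.size)))
        p = rates[members] / rates[members].sum()
        chosen = rng.choice(members, size=n_keep, replace=False, p=p)
        scale = rates[members].sum() / rates[chosen].sum()  # conserve bin rate
        keep_idx.extend(chosen.tolist())
        new_rates.extend((rates[chosen] * scale).tolist())
    return np.array(keep_idx), np.array(new_rates)
```

A quick sanity check after thinning is to compare the full event set's AAL, `(rates * losses).sum()`, with the thinned set's `(new_rates * losses[keep_idx]).sum()`.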
Earthquakes are by nature “tail risks,” and it was crucial that events far out in the tail, though very rare, were represented within the optimized event set. In aggregate, their contribution to return-period losses is significant, and they help quantify the correlation between exposure in Northern and Southern California. Accurately capturing this correlation allows for more informed diversification and capital management decisions.
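To see why the tail matters, consider how return-period losses are typically read from an event set of losses and annual rates. The sketch below is a simplified occurrence exceedance probability (OEP) calculation, not the RMS loss engine:

```python
import numpy as np

def oep_losses(losses, rates, return_periods=(100, 250, 500)):
    """Under a Poisson assumption, the probability of at least one event
    with loss >= L in a year is 1 - exp(-R), where R is the summed annual
    rate of all events with loss >= L."""
    losses = np.asarray(losses, dtype=float)
    rates = np.asarray(rates, dtype=float)
    order = np.argsort(losses)[::-1]               # largest losses first
    losses, rates = losses[order], rates[order]
    exceed_prob = 1.0 - np.exp(-np.cumsum(rates))  # ascending down the sort
    out = {}
    for rp in return_periods:
        idx = np.searchsorted(exceed_prob, 1.0 / rp)
        out[rp] = float(losses[idx]) if idx < losses.size else 0.0
    return out
```

Dropping rare, high-loss events would truncate the top of this curve, which is precisely why the optimized event set had to retain them.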
Beyond the implementation of the USGS update, version 17 also involved a complete re-calibration of the earthquake models, from site-specific hazard all the way through to damage and loss. This includes major updates to the geotechnical data, which is important for assessing ground motion amplification, liquefaction, and landslide susceptibility. Liquefaction is a localized sub-peril with a small footprint but the potential for severe damage; accurately capturing the extent and severity of liquefaction damage at a granular level is necessary for risk differentiation. Version 17 therefore introduces a probabilistic liquefaction model, bringing the lessons learned from the 2011 Christchurch earthquake into our modeling for North America.
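As a flavor of what a probabilistic liquefaction component involves, the sketch below combines a shaking-driven triggering probability with a mapped susceptibility class and groundwater depth, loosely in the spirit of published HAZUS-style relationships. Every class weight and coefficient here is hypothetical and is not the RMS model:

```python
import numpy as np

# Hypothetical susceptibility-class weights (illustrative only)
SUSCEPTIBILITY_WEIGHT = {"very_low": 0.02, "low": 0.05, "moderate": 0.10,
                         "high": 0.20, "very_high": 0.25}

def liquefaction_probability(pga_g, susceptibility, groundwater_depth_m=1.5):
    """P(liquefaction) as a capped linear function of peak ground
    acceleration (in g), scaled by the susceptibility-class weight and
    reduced for deeper groundwater. All coefficients are made up."""
    trigger = float(np.clip(4.0 * (pga_g - 0.1), 0.0, 1.0))  # triggering term
    water = 1.0 / (1.0 + 0.2 * max(groundwater_depth_m - 1.0, 0.0))
    return SUSCEPTIBILITY_WEIGHT[susceptibility] * trigger * water

# Example: strong shaking on highly susceptible soil with shallow groundwater
print(liquefaction_probability(0.4, "high", groundwater_depth_m=1.0))  # 0.2
```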
Insight from local experts in the three countries covered by the NAEQ Models (the U.S., Canada, and Mexico) informed updates to a variety of vulnerability curves, with significant updates to business interruption modeling. Additionally, tsunami assessment has been added, the first time this secondary peril has been included in the RMS NAEQ models. Access to detailed tsunami damage data from recent events in Japan allowed for the development of tsunami vulnerability functions, going beyond previous hazard-only solutions that focused purely on inundation depth.
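To illustrate the difference between a hazard-only view and true vulnerability, a tsunami depth-damage curve maps inundation depth to a mean damage ratio. The logistic form and parameters below are purely hypothetical stand-ins; the actual RMS curves are calibrated on the Japanese damage data described above:

```python
import numpy as np

def tsunami_mean_damage_ratio(depth_m, midpoint_m=2.5, slope_m=0.8):
    """Hypothetical logistic depth-damage curve: the mean damage ratio
    rises from near zero for shallow inundation toward 1.0 once depth
    passes midpoint_m. Both parameters are illustrative, not calibrated."""
    depth_m = np.asarray(depth_m, dtype=float)
    return 1.0 / (1.0 + np.exp(-(depth_m - midpoint_m) / slope_m))

# Mean damage ratios at 0.5 m, 2 m, and 5 m of inundation
print(tsunami_mean_damage_ratio([0.5, 2.0, 5.0]).round(2))  # [0.08 0.35 0.96]
```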
This blog provides only a snapshot of the changes in the new models; over the coming weeks we will share further posts highlighting the key updates to individual model components.
So, just like Zhang Heng, RMS has used the available technology to advance the understanding of earthquake events. Seismology remains a very active area of research, and RMS will continue to monitor new research and incorporate key findings into future model updates.
Why go through all of this painstaking development? Robust implementation of the latest USGS data matters because it delivers real insight, from tail risk in areas of high seismicity to risk selection in areas where soils change rapidly over short distances. To deliver this insight at an unprecedented level of granularity, we incorporated the latest scientific findings from the USGS into a loss model and re-calibrated every component of that model, all while preserving the precision of the underlying research and ensuring the models can still run in a practical time frame. We believe this care and attention to detail, together with a focus on real-world application, sets these models apart in the current market.
Ashley is a Senior Product Manager within the Applications Product Management team. At Moody's, Ashley is responsible for Risk Modeler, the model execution application within the Intelligent Risk Platform, focusing on integrating high-definition (HD) risk models as well as model customization.
Ashley has been at Moody's for nearly a decade and has held many roles including Client Support and Earthquake Model Product Management. She holds a bachelor’s degree in Industrial Engineering from Pennsylvania State University and a master’s degree in Civil Engineering from Stevens Institute of Technology.