
From client feedback, it is clear there is no one-size-fits-all approach to risk model validation and the processes involved in reviewing new and updated third-party models before they are incorporated into a company-wide view of risk. And if you use your own IT environment, it’s a big task: a full model validation could take a team of two to three cat modelers anywhere from three to six months to complete all the tasks required to integrate a new model.

What’s involved in validating models using your own IT environment? You and your team are responsible for managing your on-premises modeling solution – including handling the data files and loading them into the next stage – in what can be an elaborate and intense process. Consequently, a consistent theme during model validation for an on-premises solution is that data fidelity is always at risk, and the question arises whether the data remains a true representation as it passes through the various validation stages. Creating a like-for-like comparison of results from the existing and new models is critical to understanding how the view of risk would change if the newest model were adopted.

Data fidelity can be compromised in the process, as model reruns are commonplace and the possibility of human error is high, with incomplete datasets being transferred and/or incorrect model profiles being run. This constant need to check – and the resulting lack of trust in the data – can slow down the change management process, delaying the point at which a new view of risk can be used within the organization.
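One way to reduce this risk is to script basic fidelity checks around every data transfer. The sketch below is a minimal illustration in Python, not part of any RMS tooling: it assumes the exposure data has been exported to CSV files and that a manifest of checksums and row counts was recorded before the transfer. All file names and figures are hypothetical.

```python
import csv
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Checksum a file so source and transferred copies can be compared byte for byte."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def data_rows(path: Path) -> int:
    """Count data rows (excluding the header) to catch truncated transfers."""
    with path.open(newline="") as f:
        return sum(1 for _ in csv.reader(f)) - 1

# Hypothetical manifest recorded when the datasets left the source environment.
manifest = {
    "portfolio_locations.csv": {"sha256": "<recorded at export>", "rows": 1_250_000},
    "policy_terms.csv": {"sha256": "<recorded at export>", "rows": 48_000},
}

for name, expected in manifest.items():
    path = Path("uploads") / name  # hypothetical landing folder in the test environment
    ok = file_sha256(path) == expected["sha256"] and data_rows(path) == expected["rows"]
    print(f"{name}: {'OK' if ok else 'MISMATCH - investigate before running models'}")
```

A mismatch caught at this stage is far cheaper to resolve than a model run executed against a corrupted or incomplete dataset.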

How Organizations Validate New Models

When a new model is introduced, or an existing model is updated, there is usually a requirement to go through some form of validation to determine whether the model should be adopted by the business as a view of risk. The level of detail and analysis involved in that process will vary depending on numerous factors, including the model’s business use case and its materiality to the business.

For on-premises model solution users, this level of analysis will determine the IT environment required. Where materiality is high enough to require validation that extends beyond making sure the new software runs as expected – known as user acceptance testing (UAT) – a dedicated model testing environment will be needed to run validation tests.

During this time, validation tests will require access to the latest model in question, which is run against exposure data: a variety of potentially large datasets that need to be uploaded to the testing environment before the new models can be run.

In a model validation workflow, the focus is typically on the analysis and comparison of three types of exposure datasets:

  1. Live portfolio data: This data is predominantly used to check the expected change in portfolio losses prior to going live, comparing it with existing internal claims data wherever possible. This comparison would typically be needed as part of a documented sign-off process to demonstrate that the changes have been understood and accepted.
  2. Synthetic benchmark data: This data is used to help validate changes from one model to another and to pinpoint specific geographical areas and types of exposure characteristics that differ between risk models.
  3. Industry exposure data: This data is usually supplied by model vendors. Industry exposure datasets – such as the RMS® Exposure Data Module (EDM) – can be incredibly useful for assessing loss output from one model to another and for comparing against widely reported industry losses.
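Once losses have been generated for these datasets under both model versions, the like-for-like comparison itself is straightforward tabular analysis. As a minimal illustration, the Python sketch below compares average annual loss (AAL) by region between a current and a candidate model version; the CSV files and column names are hypothetical stand-ins for whatever export format a modeling platform provides, not an RMS API.

```python
import pandas as pd

# Hypothetical exports of average annual loss (AAL) by region
# from the current and candidate model runs.
current = pd.read_csv("aal_current_model.csv")      # columns: region, aal
candidate = pd.read_csv("aal_candidate_model.csv")  # columns: region, aal

# Join on region, then compute absolute and percentage change in AAL.
compare = current.merge(candidate, on="region", suffixes=("_current", "_candidate"))
compare["change"] = compare["aal_candidate"] - compare["aal_current"]
compare["change_pct"] = 100 * compare["change"] / compare["aal_current"]

# Flag regions where the view of risk moves by more than a tolerance (10% here),
# so validation effort goes first to the largest differences.
flagged = compare[compare["change_pct"].abs() > 10].sort_values("change_pct")
print(flagged.to_string(index=False))
```

The same pattern extends to other loss metrics, such as exceedance probability losses at key return periods.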

During the process of analyzing and validating the models, trust and confidence in the data is critical. Often, the team cannot analyze the entirety of the portfolio in the new or updated view of risk. Running your full portfolio using an on-premises solution really is not practical, because it could take days or even weeks if, as a reinsurer, you’re running hundreds of cedant contracts in your analysis. As a result, any data corruption or fidelity issues that arise in the analysis could have an outsized impact on the validation process.
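When only a subset of the book can be run, one sensible approach is to sample contracts within each region (or peril) so that the validation subset still spans the whole book rather than clustering in one area. A minimal sketch, again with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical register of cedant contracts with region and total insured value (TIV).
contracts = pd.read_csv("cedant_contracts.csv")  # columns: contract_id, region, tiv

# Sample roughly 10% of contracts within each region; fixing the random seed
# keeps the subset reproducible across validation reruns.
subset = contracts.groupby("region").sample(frac=0.10, random_state=42)

print(f"Validating {len(subset)} of {len(contracts)} contracts")
subset.to_csv("validation_subset.csv", index=False)
```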

Streamlining Model Validation With Risk Modeler

A different approach to managing the process is required to ensure trust in the data is not eroded and to eliminate the steps where data fidelity is at risk. Cloud-native applications such as RMS® Risk Modeler™ are tackling these challenges head-on.

First, new model updates are instantly available on release to users in their current production environment. This means that the datasets already held in the production environment are available to run right away, without any cumbersome data transfers. Model profiles simply need to be updated to reflect a new version – and not set up from scratch.

Second, rather than having to set up a new test environment, catastrophe modelers can run the same data against both the current and the new model versions without fear of the user errors that are so common when populating on-premises test environments.

Finally, because all modeling is managed within the same environment, users can leverage existing workflows from the production environment. There is no need to rebuild those workflows from scratch, which removes another source of human error from the validation process along with the time rebuilding would take. Data fidelity is better assured, and conducting validation checks on entire portfolios, where needed, is a simple process.

RMS clients are seeing dramatic benefits with the Risk Modeler application. From a modeler’s perspective, weeks of calendar time – which can roll into months when counting the equivalent work hours – are saved simply because a new model version is instantly available in the live production environment. Results during the model validation process can also be viewed side by side in the same system at the same time, without having to extract and compare results between versions.

Our previous blog on overcoming the IT challenges and hidden costs of risk model validation, together with the ease of analyzing and validating datasets in one environment described above, shows how Risk Modeler has helped clients eliminate much of the unnecessary burden involved in model validation.

In the next blog in this series, we will discuss how catastrophe modeling teams can further strengthen their relationships with the wider business. If you would like more information on Risk Modeler, please email us.

Evan Cropper
Director, Product Marketing, RMS

Evan leads climate change and modeling product marketing for RMS, where he helps customers develop more data-driven strategies using physical risk analytics. He has extensive experience scaling technology in the digital enterprise with a passion for using data to deliver better business outcomes.

Previously, Evan worked in various product management and marketing roles with Hitachi Vantara, Current (a subsidiary of GE Digital), and Cisco.

He holds a bachelor's degree in Political Science from Emory University, and an MBA from Vanderbilt University’s Owen Graduate School of Business.
