
Underwriters are on the front line of insurance; they make the go/no-go decisions on whether new risks enter the portfolio. Given the importance of their role as gatekeepers to new business, underwriting teams need the best available risk data for each property to empower them to make informed decisions. Downstream functions, from portfolio managers and reinsurance purchasers through to brokers, have become accustomed to utilizing the best risk data that catastrophe modeling can offer, whereas underwriters have often struggled to get the risk insight they need. In our experience, this is not due to a lack of data, which is easier to access than ever, but to a lack of quality: the data is often simply not good enough to be relied on to support underwriting decisions.

With their vital gatekeeper role, underwriters are constantly in the spotlight, having to give answers about why they took on certain risks and why they made a particular decision. Basing a decision on data that they can’t defend or substantiate is frustrating and potentially dangerous. Good-quality data, accessible to underwriters when required to support their expert decision-making, will build confidence and improve efficiency. And, overall, getting the best data built into the underwriting process pays dividends for the entire business.

Moving along the insurance workflow without knowing the full risk behind the policies being taken on makes it difficult to know whether guidelines on risk appetite are being met, or whether risks are being priced correctly. A managing general agent (MGA), for example, needs to be confident that the prescribed underwriting guidelines are being followed. When portfolio roll-up occurs, there could be big differences between the business that should have been taken on and what is now in the portfolio, with both portfolio managers and underwriters finding it hard to "square the circle." Differences in risk analytics can also become very obvious as a risk moves up and down the insurance value chain if the analysis is inconsistent.

Hazard Only

Many underwriters rely on data that focuses on hazard only, but this approach fails to account for the wide range of detail that is now routinely captured and can have significant implications for a property's susceptibility to loss. If data is available that would affect a risk decision, why not use it to your advantage? In other circumstances, risk assessment can be cruder still, with decisions based simply on whether a location is, or is not, inside a flood or hurricane wind speed zone.

The data used by underwriters can range from free, publicly available data to a patchwork of paid-for sources. Data is often outdated and sometimes difficult to validate. Publicly available data might not really be designed or appropriate for underwriting. Even when data is purchased, an insurer can end up using multiple vendors for different regions, causing difficulties when integrating data flows into business-critical underwriting systems. If the risk decision is not based on a comprehensive view of risk or is incompatible with risk modeling used in the business, the problems raised earlier remain.

And in our experience, where data is too broad at the hazard level, it can simply mean that the underwriting is not competitive: there is not enough detail to differentiate effectively between risks. For highly granular perils such as flood, detail is vital; without it, decisions amount to little more than a gamble.

Could underwriters get the location-level insight they need from in-house resources? There may be a catastrophe risk modeling team within a business, but these teams are busy enough. They can perhaps supply ad hoc analyses for a large industrial premises, for instance, where the high-value nature and rich characteristics from a site inspection warrant a full model run. But, for the most part, their focus is on the portfolio level and beyond. For high-volume residential locations, in-house teams are often not equipped to support this analysis. Serving underwriters with the risk data they require would be resource intensive in both cost and time, and would often still not deliver the speed needed to stay competitive.

Not every business uses risk models directly in-house; some receive risk analytics from a third-party partner instead. Having direct access to good-quality data, whenever and wherever it is needed, makes a real difference in building a view of risk. So, how can underwriters benefit from the same high-quality risk data used by the wider industry?

Same Data at All Levels

What has changed is that the same sophisticated cat risk analytics used for portfolio and reinsurance processes is now available for underwriting, without the need to run a cat risk model. RMS data solutions are derived from our market-leading global suite of catastrophe models. They provide location-level data that is ready to be used by underwriters for geocoding, hazard, exposure, risk scoring, or loss costs to support whichever risk decision is necessary, from screening through to pricing. Using the same data for underwriting that feeds the risk models promotes a common source of insight throughout the life cycle of a risk.

These data solutions are available across the breadth of perils and regions that RMS covers, eliminating the need to patch in multiple vendors and ensuring a consistent scientific approach. Risk scores are widely used in the industry, typically grading a specific risk from 1 to 10 for quick assessment and screening. RMS risk scores inspire greater confidence than other vendors' because they relate directly to RMS model output, providing transparency into the data and assumptions behind a given score.
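To illustrate how a 1–10 screening score can tie back to model output, here is a minimal Python sketch of banding a modeled loss cost into a score. The thresholds, the per-mille units, and the banding itself are hypothetical assumptions for illustration, not RMS's actual scoring methodology:

```python
from bisect import bisect_right

# Hypothetical loss-cost thresholds (per mille of insured value) marking
# the boundaries between score bands 1..10 -- illustrative values only.
THRESHOLDS = [0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]

def risk_score(loss_cost_per_mille: float) -> int:
    """Map a modeled loss cost onto a 1-10 screening score.

    Higher loss cost -> higher score. Because each band is defined
    directly from model output, the assumption behind any given score
    stays auditable.
    """
    return 1 + bisect_right(THRESHOLDS, loss_cost_per_mille)

print(risk_score(0.05))   # 1  (very low loss cost screens as low risk)
print(risk_score(3.0))    # 6  (mid-band)
print(risk_score(50.0))   # 10 (above the top threshold)
```

The key design point this sketch captures is traceability: a score is only as defensible as the model output it is derived from, so the mapping from loss cost to score band should be explicit rather than a black box.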

With insights ready to implement directly into high-performance underwriting systems via the RMS Location Intelligence API, or delivered as a ready-made application (SiteIQ™) for your underwriters, location data is now readily available to whoever needs it. This data goes beyond hazard, allowing you to benefit from the information you collect and refine: the same hazard can have vastly different implications depending on the vulnerability of the building stock. Vulnerability factors such as occupancy, construction, year built, number of stories, and basement presence all have a significant impact on the risk at a location.
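To show how such vulnerability factors might travel alongside a location in an API request, here is a minimal Python sketch that bundles them into a JSON body. The payload shape, field names, and requested outputs are illustrative assumptions, not the actual Location Intelligence API schema:

```python
import json

def build_location_request(address: str, occupancy: str, construction: str,
                           year_built: int, num_stories: int,
                           has_basement: bool) -> str:
    """Bundle the vulnerability factors above into a hypothetical JSON body.

    Capturing these attributes alongside the address is what lets a
    location-level service differentiate risk for properties that share
    the same hazard. Field names here are illustrative only.
    """
    payload = {
        "address": address,
        "attributes": {
            "occupancy": occupancy,
            "construction": construction,
            "yearBuilt": year_built,
            "numStories": num_stories,
            "basement": has_basement,
        },
        # Hypothetical output selection: geocoding, hazard, score, loss cost.
        "outputs": ["geocode", "hazard", "riskScore", "lossCost"],
    }
    return json.dumps(payload)

body = build_location_request(
    "123 Main St, Tampa, FL", occupancy="residential",
    construction="wood frame", year_built=1998,
    num_stories=2, has_basement=False,
)
```

A request built this way would then be POSTed to the underwriting system's integration endpoint; the point of the sketch is simply that vulnerability attributes ride along with the address, rather than hazard being looked up on location alone.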

There is a clear need to ensure good-quality risk data is available to underwriters. The same advanced model science used by upstream functions can be brought forward into the underwriting process to provide a common source of insight. This allows you to augment your underwriting expertise with data you can trust, helping to lower loss ratios, make better decisions more quickly, build a high-quality book of business, and avoid negative surprises caused by differing hazard views from other third-party sources.

Shaheen Razzaq
Vice President, Product Management, Moody's

Shaheen has over a decade of experience delivering risk management solutions to insurance and reinsurance companies. As Vice President - Product Management at Moody's, he is responsible for introducing new, innovative applications to the market.

Before joining Moody's, Shaheen was Risk Aggregations Business Unit Manager at Room Solutions Ltd. and led a department that designed and developed Exact Advantage, a popular, next-generation offshore energy risk aggregation tool. At Room Solutions Ltd, he then managed a global development team that built and successfully implemented several contract and exposure management solutions for large European commercial insurance organizations.

As a regular speaker at industry events, Shaheen often gives presentations about the business value technology delivers to organizations that manage catastrophe and non-catastrophe risk.

Shaheen holds a master’s in business and information technology from Kingston Business School.

Jordan Byk
Senior Director, Data Product Management, RMS

Jordan Byk is head of the Data Product Management team at RMS. He leads a product team responsible for global geocoding, industry exposure databases, industry loss curves, peril rating databases, building attribute databases, and the hazard, risk score, and loss cost data available via the Location Intelligence API and several RMS applications.

Over the last 12 years at RMS, Jordan has managed a broad range of products, including a weather derivatives platform, global geocoding, exposure management and data quality applications, hazard data, industry exposure data, building attributes data, and the risk score and loss cost data.

With several colleagues, Jordan holds a patent for "Resource allocation and risk modeling for geographically distributed assets," providing a methodology for preparing challenging asset classes such as energy, utility, and transportation networks for accurate catastrophe modeling. Jordan received his MBA in international business and marketing from Rutgers University and his bachelor's degree in business administration and computer science from Carnegie Mellon University.
