In a recent survey of our customers exploring the data underwriters use and need, we asked whether they agreed with a set of statements. To the statement "Brokers give all the data I need to be a good underwriter," the fraction that agreed was 0%.
This is not how it is supposed to work. According to IRMI: "A proposal for insurance submitted to an underwriter…implies more than simply a completed application unless the application contains all the information needed by the underwriter." And yet, zero percent of underwriters in our survey say they get what they need in proposals submitted by brokers.

This is a problem. Is the problem that the brokers are hiding something? I doubt it. I'm willing to bet that if we surveyed brokers, the fraction would have been 100%. The brokers absolutely want the underwriters to have the data they need so that they can get a fast response with appropriate pricing, terms and conditions.

The problem is that nobody knows what data are needed for underwriting. The underwriters know they need more than they get, and surely, they think, the brokers can provide more. The brokers think that providing all the data the underwriter thinks they need would be too much of a burden for their clients; besides, any extra data won't be used to help solve their clients' coverage needs — they worry it will only result in exclusions or denial.

Amidst this gap in expectations, there is a revolution in data availability for insurers and reinsurers. Government databases, social media, IoT, and the digitization of published content (such as Praedicat's mining of published science) are providing an explosion of new data for underwriting. These new data hold the promise of opening up vast new opportunities for better underwriting, and as insurers increasingly harness them, underwriters will compete to offer better, broader, tailored coverage. This opportunity needs to be encouraged by brokers and insurers alike, who are aligned on this outcome.

In the future, the data requirements of the submission will become vastly simpler. What will be required are unique company identifiers that facilitate linking to databases from third-party data providers.
The standardization of these identifiers needs to be a priority of carriers and brokers alike, as well as data providers. Ultimately, brokers will need to provide less data, but it will need to be the right data, namely identifiers for rapid linkage to other data sources.

One area where we have found the data deficit to be particularly egregious is casualty reinsurance. Some of the respondents to our survey were reinsurance underwriters. Many clients have told us that they don't get information on all the companies in a cedant's portfolio and, therefore, are not able to even determine whether they have aggregation across insurers by company, much less by named peril. We can't imagine that a reinsurer has all the information necessary to be a good underwriter without basic information on the companies in a portfolio.
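To make the identifier-driven workflow concrete, here is a minimal sketch of what it enables: given only company identifiers from each cedant, a reinsurer can link to third-party reference data and flag aggregation across cedants by company. The identifiers (shown here as hypothetical LEIs), company names, perils, and limits are invented for illustration; they are not from any actual submission or data provider.

```python
from collections import defaultdict

# Submissions from two cedants; each lists companies only by a unique
# identifier plus a limit. All names and figures are hypothetical.
submissions = {
    "Cedant A": [("LEI-001", 5_000_000), ("LEI-002", 2_000_000)],
    "Cedant B": [("LEI-001", 3_000_000), ("LEI-003", 1_000_000)],
}

# Third-party reference data keyed by the same identifier, standing in
# for the external databases the article describes.
reference = {
    "LEI-001": {"name": "Acme Chemicals", "peril": "chemical exposure"},
    "LEI-002": {"name": "Beta Foods", "peril": "food safety"},
    "LEI-003": {"name": "Gamma Devices", "peril": "device liability"},
}

def aggregate_by_company(subs):
    """Sum exposure per company identifier across all cedants,
    tracking which cedants contribute to each company."""
    totals = defaultdict(float)
    sources = defaultdict(list)
    for cedant, companies in subs.items():
        for lei, limit in companies:
            totals[lei] += limit
            sources[lei].append(cedant)
    return totals, sources

totals, sources = aggregate_by_company(submissions)
for lei, total in totals.items():
    if len(sources[lei]) > 1:  # same company appears in multiple cedants
        info = reference[lei]
        print(f"Clash: {info['name']} ({info['peril']}) "
              f"totals {total:,.0f} across {sources[lei]}")
```

With standardized identifiers, the reinsurer's side of the problem reduces to a join and a group-by; without them, none of this is possible, which is the article's point.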
We are seeing early work to bring about an industry consensus on data submission standards in reinsurance. For instance, the Cambridge Centre for Risk Studies is developing a multi-line exposure data schema in a project called the Global Exposure Accumulation and Clash (GEAC) project. This is a great start.