This post is part of a series sponsored by TransUnion.
Social and regulatory attention has increasingly applied fairness and equity as a lens for evaluating the outcomes of established processes such as insurance underwriting. For example, a new law in Colorado, which takes effect at the beginning of 2023, will require insurers to provide analytical evidence that operational processes using consumer data and predictive models as inputs do not result in unfair discrimination against certain consumer groups. Credit-based insurance scores (hereinafter referred to as insurance risk scores) are one example of the inputs used in these operational processes.
Insurance risk scores have become essential for insurers as they seek to quickly and accurately underwrite policies and attract new business. But the relationship between credit information and insurance risk evaluation is technical and complex. Most consumers are simply unaware that insurance risk scores are used in underwriting, and when they receive incomplete information about the practice, they may come to distrust it.
This reality highlights two dimensions of fairness: the fairness of outcomes and consumers' perception of the fairness of these practices. Both questions matter, and insurance companies must be able to demonstrate both that their practices do not result in unfair outcomes and that those practices appear fair to consumers.
Fairness testing — the need to align on best practices
Actuarial science and predictive modeling are decades old and well honed. The insurance industry has become very good at building models that are empirically sound, demonstrably strong and stable. Within the industry, however, fairness testing research and practice remain in their infancy; the work is more robust in academia.
Much of the current focus is on race, ethnicity and income; however, it’s against the law for insurance companies and consumer-reporting agencies to collect or store information on race and ethnicity, which makes it very difficult to analyze fairness and equity along these axes. The industry will need to evaluate options for capturing or estimating these characteristics.
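One approach discussed in the fairness literature for estimating characteristics that cannot be collected directly is proxy estimation, such as Bayesian Improved Surname Geocoding (BISG), which infers the probability of a consumer's race or ethnicity from surname and geography. The sketch below is a minimal illustration of the Bayesian update at the heart of such methods; all of the probability tables, group names and tract identifiers are invented placeholders, and a real implementation would rely on published Census surname and geography data.

```python
# Minimal sketch of a BISG-style Bayesian update for estimating the
# probability of a protected class from surname and geography.
# All probability tables below are invented placeholders; real
# implementations draw on U.S. Census surname and block-group tables.

GROUPS = ["group_a", "group_b", "group_c"]

# P(group | surname): hypothetical surname-conditional probabilities.
P_GROUP_GIVEN_SURNAME = {
    "garcia": [0.10, 0.80, 0.10],
    "smith":  [0.70, 0.05, 0.25],
}

# P(group | geography) and P(group) overall: hypothetical values.
P_GROUP_GIVEN_GEO = {
    "tract_001": [0.60, 0.30, 0.10],
    "tract_002": [0.20, 0.20, 0.60],
}
P_GROUP = [0.50, 0.30, 0.20]  # hypothetical population baseline


def bisg_posterior(surname: str, tract: str) -> dict:
    """Combine surname and geography evidence with a naive Bayes update:
    P(g | surname, geo) is proportional to P(g | surname) * P(g | geo) / P(g)."""
    prior = P_GROUP_GIVEN_SURNAME[surname.lower()]
    geo = P_GROUP_GIVEN_GEO[tract]
    unnormalized = [s * g / p for s, g, p in zip(prior, geo, P_GROUP)]
    total = sum(unnormalized)
    return {grp: u / total for grp, u in zip(GROUPS, unnormalized)}


if __name__ == "__main__":
    # Example: posterior group probabilities for a hypothetical consumer.
    print(bisg_posterior("Garcia", "tract_001"))
```

Whether such estimation is appropriate for fairness testing in insurance is one of the options the industry will need to evaluate, not a settled practice.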
Next, the industry will need a standard definition of fair. From a data science and predictive modeling perspective, a fair outcome is one in which predicted outcomes align with actual outcomes under some measure of statistical significance. Others would say that fair means equal treatment in outcomes across the population. As the industry works to define fair, it should consider both the variance in actual outcomes and the population profile: a behavior-adjusted fair outcome.
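To make the tension between these two definitions concrete, the sketch below uses entirely fabricated data to show a model that satisfies the first definition (predictions align with actual outcomes within each group) while failing the second (average scores differ across groups). The group structure and risk levels are assumptions chosen purely for illustration.

```python
# Sketch contrasting two competing definitions of "fair" on toy data.
# Groups, scores and outcomes are fabricated for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: predicted claim probability and actual claim outcome
# for two consumer groups whose underlying risk genuinely differs.
n = 10_000
group = rng.integers(0, 2, size=n)               # group label: 0 or 1
true_risk = np.where(group == 0, 0.08, 0.12)     # groups differ in actual risk
predicted = true_risk + rng.normal(0, 0.01, n)   # a well-calibrated model
outcome = rng.random(n) < true_risk              # realized claims

# Definition 1: predictive alignment -- do predictions match outcomes
# within each group?
for g in (0, 1):
    mask = group == g
    print(f"group {g}: mean prediction {predicted[mask].mean():.3f}, "
          f"actual claim rate {outcome[mask].mean():.3f}")

# Definition 2: equal treatment -- are average predictions equal across groups?
gap = abs(predicted[group == 0].mean() - predicted[group == 1].mean())
print(f"gap in mean predicted risk between groups: {gap:.3f}")

# The model passes the first test (predictions track outcomes in each group)
# yet fails the second (average scores differ), which is exactly the tension
# a behavior-adjusted definition of fairness would need to resolve.
```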
Consumer perception of fairness
As for consumer perception of fairness, one of the most important academic researchers on this subject is Stanford University's Dr. Barbara Kiviat, who studies social attitudes towards credit scoring. In particular, she has elaborated on the concept of logical relatedness in the use of credit scoring: Consumers resist or resent the application of credit scores to areas of their lives if they don't see a clear connection between their credit behavior and that area. And many consumers and legislators alike do not currently view credit as something logically related to insurance, which leads them to see insurance risk scores as unfair.
Dr. Kiviat, however, points out that "[L]ogically unrelated, morally heterogeneous data don't seem so bad if using them promises to expand the market to previously excluded individuals." In other words, even if consumers and policymakers don't see a logical connection between insurance risk scores and insurance pricing, will they appreciate their role in expanding the market?
Another important finding in Dr. Kiviat’s research is that consumers are more likely to find a credit-based score fair if they know it does not misclassify risks. As TransUnion has shown with the accommodations around the CARES Act, insurance risk scores can be tailored to exclude consideration of factors that are outside the control of the consumer and still remain stable and predictive.
An opportunity to raise awareness and educate consumers
Based on Dr. Kiviat's research, for consumers to accept the use of consumer data such as insurance risk scores, they must be given a clear causal theory that explains why and how the scoring system works. Insurers have the opportunity to provide that understanding by taking a number of steps to raise awareness and educate consumers on the use of credit information in underwriting, including:
- How and why credit information is used
- The benefits and opportunities it provides to consumers
- The protections and rights afforded to consumers in the current process
What would an education campaign about insurance risk scores look like in practice? TransUnion specifically recommends that insurers:
- Provide consumers with an explanation of what insurance risk scores are, how they differ from financial credit scores and how insurers use them in combination with other variables to underwrite policies.
- Explain to consumers why insurance risk scores are used in underwriting, with a focus on the benefits to consumers.
- Provide consumers information on the protections and rules governing insurance risk scores, including rights that consumers have to access, dispute and direct how their personal credit information is used.
- Describe to consumers the credit behaviors that can lead to an improvement in their score. By providing consumers with this information, you can empower them to control and manage their personal credit history, which can lead to greater financial inclusion and lower costs.
Finally, insurers must take their advocacy mission to local and national legislators, as well. Teams working with insurance risk score-informed products should work hand in hand with corporate government relations teams to identify potential trouble spots. Now is a great time to make your colleagues in government relations aware of this topic and ensure they are working to engage on your company’s behalf.