How would you feel about artificial intelligence setting your insurance premiums, thereby determining the price you pay based on potentially millions of inputs collected from social media, your spending history and technology embedded in your home and car? Although such “black box” algorithms aren’t in use yet, the struggling insurance industry is giddy about their long-run potential. That enthusiasm needs to be matched by regulators taking much-needed steps to establish guardrails.
After a couple of rocky years, insurance companies are experimenting with AI to evaluate claims, screen policyholders and control day-to-day costs. Those efforts make sense and may ultimately smooth the industry’s boom-bust cycles, perhaps even lowering costs for policyholders. But regulators nationwide must follow the lead of Colorado and New York to ensure that profitability improvements don’t come at the expense of consumer equity and privacy.
First, there’s the issue of data-driven bias, a long-standing problem in insurance that predates the recent flurry of AI excitement. Several studies have shown that the use of credit scores in underwriting car insurance has tended to push premiums higher for people of color, even after holding driving history constant. Similar effects have been identified in the use of zip codes. Now, AI’s proliferation could bring a plethora of additional variables into the process, including social media posts, purchasing habits and higher-frequency location data. In a worst-case scenario, AI could mean data-driven bias on steroids.
Second, there are valid concerns about consumer privacy itself. To feed their data-hungry algorithms, many car insurers are doubling down on the promise of “telematics” programs, which track driving habits through smartphone applications or, increasingly, technology embedded in cars themselves, as New York Times reporter Kashmir Hill documented recently. Smart sensors have started to bring similar analysis to homeowners and business insurance, creating an opening for carriers to effectively intrude on every aspect of customers’ lives. In the case of auto insurance, the New York Times found that some drivers were unwittingly giving their data away.
Recent developments in Colorado and New York show that at least some states are paying attention. Last year, the Colorado Division of Insurance released regulations on AI’s use by life insurers, and this year the New York State Department of Financial Services issued guidelines for all insurers in the state. Other jurisdictions must follow suit to protect consumers.
Among other things, the frameworks seek to establish testing protocols to ensure that AI systems using external consumer data aren’t spitting out discriminatory results. Such tests are very hard to design and can themselves be subject to some of the same problems as the AI outputs. Counterintuitively, companies need to make assumptions about customers’ race in order to rule out racism in the algorithms. Even then, they need to figure out whether people of different races, creeds and sexual orientations pay a similar price per unit of risk—which isn’t the same as paying the same premium.
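The distinction between equal premiums and an equal price per unit of risk can be made concrete with a toy calculation. The groups, premiums and expected losses below are entirely hypothetical, and this is only a sketch of the idea, not any regulator’s actual testing protocol:

```python
# Hypothetical fairness check: compare the premium paid per unit of
# expected loss across groups, rather than comparing raw premiums.
# All figures are illustrative, not real insurance data.

def price_per_unit_of_risk(premiums, expected_losses):
    """Total premiums collected divided by total expected losses for a group."""
    return sum(premiums) / sum(expected_losses)

# Two hypothetical groups with different underlying risk profiles.
group_a = {"premiums": [1200, 900, 1500], "expected_losses": [800, 600, 1000]}
group_b = {"premiums": [1800, 1350, 2250], "expected_losses": [1200, 900, 1500]}

ratio_a = price_per_unit_of_risk(group_a["premiums"], group_a["expected_losses"])
ratio_b = price_per_unit_of_risk(group_b["premiums"], group_b["expected_losses"])

# Group B pays higher premiums in absolute terms, but both groups pay the
# same 1.5x markup per unit of expected loss -- equal pricing of risk,
# not equal premiums.
print(ratio_a, ratio_b)  # 1.5 1.5
```

In this sketch the higher-risk group pays more in dollars but no more per unit of risk; a test that only compared average premiums would flag a disparity that, by this measure, isn’t discriminatory.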
Of course, data sharing can have substantial benefits. One insurtech company, Root Inc., has argued that telematics represents a much fairer way to price auto insurance. Do you tend to accelerate rapidly? Do you slam on the brakes too much? Those are extremely powerful inputs for any risk model, and they could help banish dated and discriminatory data points like credit scores from underwriters’ toolkits.
Elsewhere, building sensors could help companies zero in on the hard-to-price risks of climate change—one of the existential questions facing homeowners and flood insurance. If you give insurance companies exactly what they need, you theoretically reduce the incentive for them to probe into other less risk-correlated elements of your life. For their part, companies say that telematics data is collected with user consent. General Motors Co. said it ended information sharing with two data brokers following the New York Times report—emphasizing the need for greater oversight in this area.
In another take, Daniel Schreiber, chief executive of insurtech company Lemonade Inc., has argued that bias is a feature of overly simple data models. If you base risk on five factors, he has said, it’s obvious that some people will suffer from discrimination. A simple model may accurately predict (in one hypothetical) that men are worse drivers than women on average, and then proceed to overcharge all men for insurance—even the relatively good drivers among them. Complex AI-driven models with millions of inputs, on the other hand, may do a better job of separating the good male drivers from the bad ones. Maybe one day, anyway.
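Schreiber’s hypothetical is an instance of aggregation bias, and it can be sketched in a few lines. The drivers and loss figures below are made up purely for illustration:

```python
# Toy illustration of aggregation bias (all numbers hypothetical).
# A coarse model charges everyone in a group the group's average expected
# loss; a granular model charges each driver based on their own behavior.

drivers = [
    {"name": "careful driver", "group": "men", "expected_loss": 500},
    {"name": "reckless driver", "group": "men", "expected_loss": 1500},
]

# Coarse model: one price for the whole group.
group_avg = sum(d["expected_loss"] for d in drivers) / len(drivers)  # 1000

for d in drivers:
    coarse_price = group_avg             # same price for everyone in the group
    granular_price = d["expected_loss"]  # price tied to individual risk
    overcharge = coarse_price - granular_price
    print(d["name"], overcharge)
# careful driver is overcharged by 500; reckless driver is undercharged by 500
```

The coarse model is “accurate” about the group average yet systematically overcharges the low-risk members of the group, which is the gap a model with far more individual-level inputs could, in principle, close.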
Ultimately, insurance is all about accurately understanding risk and pricing it fairly so that safe drivers aren’t subsidizing reckless ones and people in more stable climates aren’t backstopping hurricane risk in coastal Florida—the classic “moral hazard” problem that results in perverse incentives. Although the risk takers may not think so, the entire market is generally better off when risk is appropriately priced—a massive data challenge that makes AI such a powerful tool for insurance. Lemonade says that AI is increasing customer satisfaction and helping expedite claims. But the technology’s full promise is still years in the future and riddled with technological and regulatory obstacles. For all the encouraged customers, there are many others who have become irate after flawed and frustrating interactions with bots.
Clearly, the industry has a right to innovate. In recent years, it’s been challenged by inflation, natural disasters and runaway litigation costs, all of which hit profitability. Creative risk management solutions are in everyone’s interest. But ultimately, states must take a proactive stance to make sure that such advances don’t have unintended consequences.
Jonathan Levin is a columnist focused on U.S. markets and economics. Previously, he worked as a Bloomberg journalist in the U.S., Brazil and Mexico. He is a CFA charterholder.
This article was provided by Bloomberg News.