Insurers' Big Data use brings on a regulatory headache

09 May 2019 12:41

Insurers’ use of Big Data tools is posing a headache for the sector’s regulators, as a new study shows more than half of companies are already using artificial intelligence or plan to do so.

Supervisors at the EU's sector regulator, the European Insurance and Occupational Pensions Authority, are starting to worry about the implications of “black box” algorithms for fairness, financial stability and ethics — fears that might well translate into regulatory action.

Online sources offer a vast pool of information that insurers can use, from how people spend their money to how much they drink or how far they have jogged this week. Artificial intelligence tools offer new ways to crunch these data.

A study published yesterday by Frankfurt-based Eiopa shows that 31 percent of health and car insurers are already using machine-learning tools and a further 24 percent have them under development. The trend raises concerns ranging from racial discrimination to uncheckable errors that could destabilize the economy.

Data nothing new

Insurers have long used a host of data to run their business: taking detailed medical histories to price health premiums, for example. But at least this has been done by humans, whose techniques can be tracked and whose sums can be audited. It’s far harder for regulators to monitor what’s going on inside a computer.

It's one thing to use online data analytics for, say, targeted marketing of insurance. Someone cooing over puppy pictures online might be amenable to buying a policy for their own pooch, and this seems relatively unproblematic.

But using data to decide on premium levels is more of a worry. An exhaustive analysis of someone’s genetic code, for example, could see high-risk customers priced out of the health insurance market.

Genetic data

DNA analysis is indeed making slow inroads into European health insurance markets: One company is already using genetic data, and another nine are due to do so within the next few years, Eiopa says. Yet that could fall foul of laws against discrimination.

In many sectors insurers are already forbidden from differentiating based on, for example, gender; a 2011 ruling by the EU courts barred insurers from offering women lower motor premiums.

If a premium is priced using the output of complex number crunching, however, insurers may end up discriminating without even realizing it, for example because the inputs they use, such as shopping habits or fitness app data, correlate with sex or race.
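To see how such proxy discrimination can creep in, consider a minimal, hypothetical sketch (the data and model below are invented for illustration, not taken from the Eiopa study): a pricing model that is never shown the protected attribute still ends up charging one group more, because one of its inputs correlates with group membership.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: the protected attribute is never fed to the model,
# but a "shopping habits" score happens to correlate with it.
group = rng.integers(0, 2, n)                          # hidden protected attribute
shopping = group + rng.normal(0.0, 0.5, n)             # proxy input
claims = 1.0 + 0.3 * group + rng.normal(0.0, 0.2, n)   # historical claims

# Fit premiums by least squares using only the proxy and an intercept.
X = np.column_stack([shopping, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, claims, rcond=None)
premium = X @ coef

# Premiums split along group lines even though "group" was never an input.
print(f"mean premium, group 0: {premium[group == 0].mean():.3f}")
print(f"mean premium, group 1: {premium[group == 1].mean():.3f}")
```

Simply dropping the protected field from the inputs, in other words, does not remove its statistical footprint from the output.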

In January, regulators in New York cautioned insurers against buying opaque datasets from external providers, in part because of fears that they may conceal prohibited forms of discrimination.

Even information such as e-mail addresses, usually regarded as less sensitive under EU data rules, can correlate with factors protected by law, Eiopa says.

Sensitive about price sensitivity

When it comes to how data are employed, there are distinctions to be made: Using information to match premiums to risk is very different from using it to figure out how much you can sting someone.

Technology already enables websites to quote higher flight prices to customers booking on a new iPhone, a device assumed to have more affluent users than other smartphone brands.

But in the insurance sector, that kind of price determination could be unethical, and “potentially result in illegal price discrimination” if not in line with objective actuarial norms, Eiopa says.

Finally, there is a financial stability concern. AI is only as intelligent as the inputs it gets; small errors can rapidly spiral. Checking the prudential soundness of calculations that emerge from a black box is not easy.
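As a stylized illustration of how a small error can spiral (all numbers below are invented for the sketch, not drawn from the Eiopa report): when two rating inputs are nearly collinear, a 1 percent slip in a single record can swing the fitted coefficients by half their value, making the soundness of the resulting model genuinely hard to check from the outside.

```python
import numpy as np

# Hypothetical example: two rating factors that are almost copies of each other.
X = np.array([[1.0, 1.00],
              [2.0, 2.01],
              [3.0, 2.99],
              [4.0, 4.00]])
y = np.array([1.0, 2.0, 3.0, 4.0])   # consistent with premium = 1*x1 + 0*x2

coef_clean, *_ = np.linalg.lstsq(X, y, rcond=None)

y_err = y.copy()
y_err[1] += 0.01                     # one record mis-entered by 1 percent
coef_err, *_ = np.linalg.lstsq(X, y_err, rcond=None)

print(coef_clean)   # close to [1, 0]
print(coef_err)     # roughly [0.46, 0.54]: the 1 percent slip moved
                    # each coefficient by about 0.54
```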

Supervisors will need the tech savvy to audit an algorithm, and the data it relies on, to make sure insurers are not taking unnecessary risks.

Data safeguards

In theory, the EU's privacy rulebook — the General Data Protection Regulation — requires companies to explain how data are going to be used, with particular strictures on sensitive information such as health or criminal records.

But regulators are already skeptical that consumers read, or are even aware of, lengthy terms and conditions on data use. Data policies become still harder to explain to the man on the street if the information is going to be processed by opaque algorithms.

"It is … debatable how firms can meet GDPR's requirement to explain to consumers in a meaningful way the functioning of [Big Data analytics] tools,” Eiopa concludes.

The agency now says it wants to look further at the implications for supervision, such as whether to amend the EU's Solvency II insurance laws to ensure procedures can always be audited. The Big Data regulatory headache is beginning to bite.
