Image: a color-coded credit score scale. Photo by CafeCredit.com, Flickr CC

Algorithms seem to be shaping more and more of our world. Yet algorithms, the rule- or process-based calculations most often carried out by computers, have been an important part of society for centuries. In her new research, Barbara Kiviat explores how policymakers respond to one not-so-new use of algorithms and the predictions they produce: insurance companies' use of credit scores to set prices.

Kiviat examines thousands of pages of documents and 28 hours of testimony from state, congressional, and professional debates and investigations into insurance companies' use of credit scores. Credit scores are the output of algorithms that draw on vast amounts of consumer financial information. Insurance companies use these scores to set prices based on predictions of how often a customer will file insurance claims, so customers with lower credit scores pay higher prices. Within the insurance industry there is widespread agreement that this practice is justified by "actuarial fairness." In other words, it is fair to use the data to set prices because credit scores really do predict how often someone will use their insurance.
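To make that pricing logic concrete, here is a minimal sketch of credit-based insurance pricing. Everything in it, the score bands, the multipliers, and the base premium, is invented for illustration; it does not come from Kiviat's study or any real insurer.

```python
# Hypothetical illustration of credit-based insurance pricing.
# All numbers (bands, multipliers, base premium) are invented for this sketch.

BASE_ANNUAL_PREMIUM = 1_000.00  # assumed base price in dollars

# Assumed mapping from credit-score bands to price multipliers:
# lower scores -> higher predicted claim frequency -> higher prices.
SCORE_BANDS = [
    (760, 0.85),  # scores of 760 and up get a discount
    (700, 1.00),  # 700-759 pay the base rate
    (640, 1.25),  # 640-699 pay a surcharge
    (0,   1.60),  # below 640 pay the largest surcharge
]

def quote_premium(credit_score: int) -> float:
    """Return an annual premium based solely on a credit-score band."""
    for floor, multiplier in SCORE_BANDS:
        if credit_score >= floor:
            return BASE_ANNUAL_PREMIUM * multiplier
    raise ValueError("credit score must be non-negative")

print(quote_premium(780))  # 850.0  -- high score, lower price
print(quote_premium(610))  # 1600.0 -- low score, higher price
```

The "actuarial fairness" argument is about the prediction inside such a table: if the score bands really do track claim frequency, the industry considers the resulting price differences fair by definition.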

Policymakers, however, do not accept the insurance industry's argument that credit scores are "actuarially fair." Instead, they draw on ideas of "moral deservingness": they try to determine whether people are actually responsible for the good or bad behaviors that their credit scores, and thus their insurance prices, reflect. Policymakers object to the use of credit scores when the scores do not match their understandings of what counts as good or bad behavior. For instance, policymakers have sought to include "extraordinary life circumstances" provisions in insurance regulation so that consumers are not penalized for poor credit scores resulting from, for example, the death of a spouse or child.
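An exemption of this kind might look something like the standalone sketch below. The list of qualifying circumstances and the fallback behavior are invented for illustration; actual provisions vary by state and are written in regulation, not code.

```python
# Hypothetical "extraordinary life circumstances" provision of the kind
# policymakers have sought. Circumstance names and numbers are invented.

BASE_ANNUAL_PREMIUM = 1_000.00  # same assumed base rate as the sketch above

EXTRAORDINARY_CIRCUMSTANCES = {
    "death_of_spouse_or_child",
    "serious_illness",
    "identity_theft",
}

def adjust_for_circumstances(credit_based_premium: float,
                             circumstances: set) -> float:
    """Waive a credit-based surcharge when a recognized circumstance applies."""
    if circumstances & EXTRAORDINARY_CIRCUMSTANCES:
        return min(credit_based_premium, BASE_ANNUAL_PREMIUM)
    return credit_based_premium

# A low-score customer surcharged to $1,600 drops back to the base rate:
print(adjust_for_circumstances(1_600.00, {"death_of_spouse_or_child"}))  # 1000.0
print(adjust_for_circumstances(1_600.00, set()))                         # 1600.0
```

Note what the exemption does: it overrides the algorithm's prediction whenever the low score has a cause policymakers deem morally blameless, which is exactly the logic of "moral deservingness" rather than "actuarial fairness."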

This research shows that policymakers do not object to predictive practices because they are mysterious or confusing. Rather, they object when algorithmic results clash with existing assumptions about what counts as good or bad behavior. Kiviat's findings are important to consider as algorithms and the predictions they produce spread through more of our social and economic life: identifying students at "high risk" of poor academic outcomes, informing policing by "predicting" crime, or showing job ads to some people and not others.

Fairness-based resistance to algorithms, though, can only go so far. Who will be protected from the use of algorithms if we think they are unfair only to "good" people?