Note — Feb 17, 2019

Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination

Joi Ito reviews some of the history of redlining in the US and draws a parallel between the statistics of “actuarial fairness” and our contemporary use of data by AIs, arguing that the insurers' claim “that their job was purely technical and that it didn’t involve moral judgments” mirrors what the Googles and Facebooks are doing today.

They argued that they were just doing their jobs. Second-order effects on society were really not their problem or their business. […]

Thus began the contentious career of the notion of “actuarial fairness,” an idea that would spread in time far beyond the insurance industry into policing and paroling, education, and eventually AI, igniting fierce debates along the way over the push by our increasingly market-oriented society to define fairness in statistical and individualistic terms rather than relying on the morals and community standards used historically. […]

So while redlining for insurance is not legal, when Amazon decides to provide Amazon Prime free same-day shipping to its “best” customers, it’s effectively redlining—reinforcing the unfairness of the past in new and increasingly algorithmic ways. […]

We must create a system that requires long-term public accountability and understandability of the effects on society of policies developed using machines. The system should help us understand, rather than obscure, the impact of algorithms on society. We must provide a mechanism for civil society to be informed and engaged in the way in which algorithms are used, optimizations set, and data collected and interpreted.