Great short piece on reframing the conversation about AI ethics: why terms like “fair” and “transparent” are unreliably interpreted, and how such discussions have (so far) led the field of AI to believe it is neutral, even though it "both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful.” More generally, Kalluri wants the discussion framed around how, when, and whether AI shifts power, and wants AI's development to include every community that would lose power or come under heightened scrutiny through these tools.
Researchers should listen to, amplify, cite and collaborate with communities that have borne the brunt of surveillance: often women, people who are Black, Indigenous, LGBT+, poor or disabled. […]
The group is inspired by Black feminist scholar Angela Davis’s observation that “radical simply means ‘grasping things at the root’”, and that the root problem is that power is distributed unevenly. […]
Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. […]
Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful.