The AI Now Institute held a symposium where Kate Crawford and Meredith Whittaker gave a talk summarizing the year in AI. It covers (1) facial and affect recognition; (2) the movement from “AI bias” to justice; (3) cities, surveillance, and borders; (4) labor, worker organizing, and AI; and (5) AI’s climate impact. Loads and loads of links — useful as an overview, and as a nice starting point to catch up on what’s going on in the field.
There has also been wider use of affect recognition, a subset of facial recognition, which claims to ‘read’ our inner emotions by interpreting the micro-expressions on our face. As psychologist Lisa Feldman Barrett showed in an extensive survey paper, this type of AI phrenology has no reliable scientific foundation. But it’s already being used in classrooms and job interviews — often without people’s knowledge. […]
And let’s be clear: this is not a question of perfecting the technology or ironing out bias. Even perfectly accurate facial recognition will produce disparate harms, given the racial and income-based disparities in who gets surveilled, tracked, and arrested. As Kate Crawford recently wrote in Nature, debiasing these systems isn’t the point — they are “dangerous when they fail, and harmful when they work.” […]
As part of the deal, Amazon gets ongoing access to video footage, and police get access to a portal of Ring videos that they can use whenever they want. The company has already filed a patent for facial recognition in this space, indicating that it would like the ability to compare subjects on camera with a “database of suspicious persons” — effectively creating a privatized surveillance system of homes across the country.