Seen in → No.165
Algorithmic tools to “help” hiring decisions are multiplying, and many are based on shoddy (to say the least) science. Mona Sloane, writing for OneZero, examines this issue, the use of AI more broadly, and how auditing is presented as a solution yet often falls into the trap of too-cozy collaboration with the companies being audited. She also offers ideas for better auditing and better protection: contestability by design and leveraging government procurement for broader influence are two promising directions.
[W]e are facing an underappreciated concern: To date, there is no clear definition of “algorithmic audit.” Audits, which on their face sound rigorous, can end up as toothless reputation polishers, or even worse: They can legitimize technologies that shouldn’t even exist because they are based on dangerous pseudoscience. […]
We should focus on turning to library science experts, organizational ethnographers, and historians to develop strategies for documenting how digital technologies are used, in what context, and to what end — regardless of whether they classify as an “automated decision-making tool” or “A.I.” in that moment.