Note — Dec 16, 2018

After a Year of Tech Scandals, Our 10 Recommendations for AI

Seen in → No.61

Source → medium.com/@AINowInstitute/after-a-year-of-tech...

The AI Now Institute’s annual report, centered on 10 recommendations for AI. Lots of good directions to consider and keep thinking about. The parts that especially drew my attention: the sector-specific approach, affect recognition, governance, trade secrecy, and the detailed accounting of the “full stack supply chain,” which is not something I’d seen elsewhere.

At the core of these cascading AI scandals are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? […]

Communities should have the right to reject the application of these technologies in both public and private contexts. Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance. […]

Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level. […]

This should include rank-and-file employee representation on the board of directors, external ethics advisory boards, and the implementation of independent monitoring and transparency efforts. […]

The full stack supply chain also includes understanding the true environmental and labor costs of AI systems. This incorporates energy use, the use of labor in the developing world for content moderation and training data creation, and the reliance on clickworkers to develop and maintain AI systems.