Ana Brandusescu explains how Responsible AI and ethics in AI don't go far enough: the question can't only be how to build something responsibly, but whether it should be built in the first place. She argues for stronger sanctions and repercussions, including prison for the people behind these products, and for much broader consultation with varied specialists in other fields. (The article includes loads of references to research and articles.)
Murgia and Yang (2019) report that US tech companies like Microsoft are collaborating on facial recognition technology research with China’s National University of Defense Technology, which operates under China’s Central Military Commission. […]
Responsible AI needs to include regulations and sanctions that are enforced, so that AI systems and the humans involved in designing, building, and implementing them will be held accountable – and be responsible for the consequences, good and bad. […]
Power structures (and I don’t mean power grids) in AI and their impact on product design and development require interrogation by outsiders. This means almost everyone who isn’t in the technology industry: for example, regulators, social workers, ethnographers, grassroots organisations, and local activists working on digital and non-digital projects, to name a few. Collective action also includes unpacking power dynamics in user interactions.