M.C. Elish and danah boyd of the Data & Society Institute go over some of the issues with the current state of AI and how such algorithms could be made more ethical. Most importantly here, they posit that “in nearly every instance the imagined capacity of a technology does not match up with current reality. As a result, public conversations about ethics and AI often focus on hypothetical extremes.”
They propose three questions to “surface everyday ethical challenges raised by AI”:
- “What are the unintended consequences of designing systems at scale based on existing patterns in society?”
- “When and how should AI systems prioritize individuals over society, and vice versa?”
- “When is introducing an AI system the right answer—and when is it not?”
> And yet, AI is not, and will not be, perfect. To think of it as such obscures the fact that AI technologies are the products of particular decisions made by people within complex organizations. AI technologies are never neutral and always encode specific social values. […]
>
> When it comes to AI and ethics, we need to create more robust processes to ask hard questions of the systems we’re building and implementing. In a climate where popular cultural narratives dominate the public imaginary and present these systems as magical cure-alls, it can be hard to grapple with the more nuanced questions that AI presents.