I’m not quite sure how to frame this piece by Lizzie O’Shea. For longtime readers, there isn’t that much new in here, although it is a very good overview of the multiple ways in which AI is currently encroaching on people’s liberties while producing a series of mistakes and failures that would be comical if they weren’t so serious.
However, there is something to the way O’Shea assembles it, and she touches on a few other ideas we’ve seen around here. First, how a number of big tech oligarchs are afraid of AI in a way that caricatures their ideology. They fear an “artificially intelligent apex predator,” basically wondering, without realising it, ‘what might I do if I were supremely intelligent? Oh, probably destroy the microbes who created me.’ In other words, they think a supremely intelligent AI would be a supremely intelligent version of themselves. Second, the idea we’ve seen a few times that someone’s utopia is someone else’s dystopia. Well, here someone’s future dystopia (dangerous AIs threatening a way of life) is someone else’s present-day dystopia.
Finally, her conclusion and most important insight is intriguing. If we assume that some AIs will be useful, if we realise that they are decision-making machines, if we understand that they will make mistakes, what then? O’Shea suggests going the way of “a more capacious idea of rights—including the right to appeal, the right to privacy, and concepts of individual human dignity.” And coming up with the right kind of regulation, which “would require documentation, controlled experimentation, caution, and consideration.” Again, not specifically new, but the idea that AIs “process us as raw material and treat us as deidentified subjects defined by our metadata” echoed, for me, carbon emissions. Basically, if AIs and emissions are going to exist, then thorough accounting, accountability, and pricing of externalities might help society to control these things. I’m not sold on either one, but if there is a common(ish) set of tools and approaches for controlling two types of excesses, maybe those two pushes for control can reinforce each other?
[F]ormer Detroit police chief James Craig estimated facial recognition failed 96 percent of the time. More generally, research indicates that over 85 percent of artificial intelligence projects used in business fail. The failure rates of these automated systems are a stunning indictment of technology that is the object of such significant financial and political investment. […]
At present, systems of automation support a lucrative industry and provide cover to governments that prefer to spend money on technology-as-magic rather than grapple with social inequality and dysfunction. This is politically possible because our present system allows for the benefits of the automation revolution to be privatized and the harms to be socialized. […]
Given we live in a digital dystopia of decision-making gone haywire, should we embrace a counter-utopian ideal of being failure free, or a realist rejection of the idea that such a thing is possible? […]
What does it mean for natural language processing that a significant amount of news media in recent decades has been obsessed with the nexus between Islam and terrorism, for example? As Crawford observes, “Datasets in AI are never raw materials to feed algorithms: they are inherently political interventions.” […]
If the purpose of a system is what it does, we need to impart intention into our use of automated technologies by building in systems of rights for those who experience these systems in unintended ways.