In this relatively long read, Emily M. Bender precisely deconstructs and critiques Steven Johnson’s piece on AI for NYT Magazine. TL;DR: he drank too much of the OpenAI Kool-Aid and left his journalist’s notebook at the door. It’s a good read where Bender takes every uncritical bit and explains why it’s wrong and/or feeds the legend being built around “AI.” Useful as a lens for further reading on the topic, and if you want to pay attention to how narratives (and futures) are built and slowly become ‘the truth.’
It also made me realise anew why critics of the technology and gung-ho proponents are so misaligned; to quite a degree, they are not talking about the same thing. The former are (I’m greatly caricaturing here) saying “we know of various social and tech issues, can this maths-based technology be helpful in reversing them?”, while the latter are thinking “let’s see what this super cool thing that feels like scifi can do, we’ll figure the rest out later (maybe).”
[T]he skeptics’ framing seems to shift the burden of proof away from those who claim to be doing something outlandish (building “AGI”) and towards those who call out the unfounded claims. […]
[T]he relevant question is not “how do we build ‘AI’?” but rather things like “How do we shift power so that we see fewer (ideally no) cases of algorithmic oppression?”, “How do we imagine and deploy design processes that locate technology as tools, shaped for and in the service of people working towards pro-social ends?”, and “How do we ensure the possibility of refusal, making it possible to shut down harmful applications and ensure recourse for those being harmed?” […]
Talking about “teaching machines values” is a fundamental misframing of the situation and a piece of AI hype. Software systems are artifacts, not sentient entities, and as such “teaching” is a misplaced and misleading metaphor (as is “machine learning” or “artificial intelligence”). Rather, as with any other artifact or system, the builders of these systems are designing values into them. […]
Just because that text seems coherent doesn’t mean the model behind it has understood anything or is trustworthy. Just because that answer was correct doesn’t mean the next one will be. When a computer seems to “speak our language”, we’re actually the ones doing all of the work.