Benjamin Bratton and Blaise Agüera y Arcas for NOEMA magazine, in part about the LaMDA/Blake Lemoine/sentience kerfuffle, but actually a lot more interesting than an engineer hoodwinking himself. They make a few important points. First, that maybe “[w]e need more specific and creative language that can cut the knots around terms like ‘sentience,’ ‘ethics,’ ‘intelligence,’ and even ‘artificial,’ in order to name and measure what is already here and orient what is to come.” Second, that even if LaMDA is not doing what Lemoine thinks it’s doing, it’s still intriguing that the AI ‘knows’ enough about language to reply in ways that make him think it’s sentient. In other words, it’s not sentient, but the fact that it manages to ‘fake it’ is intriguing. They go in a number of other directions, including synthetic media and what can be done right now; they detail their “seven problems with synthetic language at platform scale,” and more. Well worth a read.
I want to take specific note of the second point above: that beyond the debunking of sentience, something promising remains. Bratton has done this same kind of exercise a few times; one that sticks in my mind is the distinction between surveilling and measuring. He believes (my words), and I tend to agree, that capitalistic surveillance is entirely different from measuring at a global scale for the purpose of planetary governance. In both cases, the important work of properly comprehending, analyzing, and sometimes pushing back on technologies must not prevent us from recognizing the potential and redirecting discourse so that ‘we’ don’t throw the baby out with the bathwater.
Here they discuss AI, sentience, and the philosophy of AI; elsewhere it was data and surveillance. But the common thread is that better understanding, better language to discuss new things, better perspective to contemplate potential risks and rewards, and a cool head all matter. People are bullshitting left and right, but incredible technologies are being developed, and they can serve ‘higher purposes’ beyond profit or an engineer looking for a friend.
LaMDA is instead constructing new sentences, tendencies, and attitudes on the fly in response to the flow of conversation. Just because a user is projecting doesn’t mean there isn’t a different kind of there there. […]
As Large Language Models, such as LaMDA, come to animate cognitive infrastructures, the questions of when a functional understanding of the effects of “language”— including semantic discrimination and contextual association with physical world referents — constitute legitimate understanding, and what are necessary and sufficient conditions for recognizing that legitimacy, are no longer just a philosophical thought experiment. […]
Strongly committed as we are to thinking at planetary scale, we hold that modeling human language and transposing it into a general technological utility has deep intrinsic value — scientific, philosophical, existential — and compared with other projects, the associated costs are a bargain at the price. […]
[S]ome may find themselves dismissing or disallowing other realities that also constitute “AI now:” drug modeling, astronomic imaging, experimental art and writing, vibrant philosophical debates, voice synthesis, language translation, robotics, genomic modeling, etc. […]
[T]he ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct more nuanced vocabularies of analysis, critique, and speculation based on the weirdness right in front of us.