Note — Sep 04, 2022

Beyond Hyperanthropomorphism

Quite a long piece by Venkatesh Rao, written in a more ‘academic’ fashion than I personally prefer, meaning he cites a lot of other people and ideas to ‘back up’ his argument, sometimes to the detriment of easy parsing.

Rao proposes a thought experiment where he generously grants some validity to certain hand-wavy positions (which he calls “philosophical nonsense”) about claimed pseudo-traits of AI (“sentience,” “consciousness,” “intentionality,” “self-awareness,” “general intelligence”), and then tries to prove them by digging behind the words for concepts or data that would support those positions. Needless to say, he fails, which proves his point: “hyperanthropomorphic projections” are wrong, waste our time, and promote unfounded fears.

Bear in mind, he doesn’t wave away potential dangers; he waves away imagined intelligence-based dangers. Technologies can still be dangerous, bridges do collapse, and a swarm of killer drones would still kill, but not through some form of advanced intelligence.

I’m not going to try to synthesise this too much; it’s worth the effort to read in full. There are two main things I’d still like to pull out, though. First, he uses “the idea of there being something it is like to be an entity,” which he shortens to “SIILTBness.” “There is something it is like to be a bat. There is something it is like to be a chimpanzee. There is something it is like to be a human.” He spends a great chunk of the article wondering whether there is a SIILTBness to AI, which leads him to the naive case for fear, my favourite section.

Second, an argument about embodiment that has also been made elsewhere, and which his piece helps us understand better. In short, there is a breadth and depth of understanding (a bandwidth) to human perception of the world that, combined with the complexity of our brain, creates an understanding of ourselves as selves. If another intelligence doesn’t have that understanding, how can we use our impression of “intelligence” as a shared trait, much less one that can then be compared?

There’s a there there that the pseudo-trait terms gesture at. Our current language (and implied ontology) is merely inadequate to the point of uselessness as a means of apprehension. […]

In other words, to the extent the computer is like the brain, there should be something it is like to be a computer, and we should be able to experience at least some impoverished version of that, and going the other way, there should be something it is like for a computer to experience being like a human (or superhuman). […]

This is dragon-hunting with magic spells based on extrapolating the existence of clouds into the existence of ectoplasm. We’re using two rhyming kinds of philosophical nonsense (one that might plausibly point to something real in our experience of ourselves, and the other something imputed, via extrapolation, to a technological system) to create a theater of fictive agency around made-up problems. […]

The answer is clearly no. The sum of the scraped data of the internet isn’t about anything, the way an infant’s visual field is about the world. So anything trained on the text and images comprising the internet cannot bootstrap a worldlike experience. So conservatively, there is nothing it is like to be GPT-3 or DALL-E 2, because there is nothing the training data is about. […]

AI is too interesting to sacrifice at the altar of confused hyperanthropomorphism. We need to get beyond it, and imagine a much wider canvas of possibilities for where AI could go, with or without SIILTBness, and with or without super-ness of any sort.