Beyond hyperanthropomorphism ⊗ Mars is irrelevant to us now ⊗ A machine for thinking

This week →{.caps} Beyond hyperanthropomorphism ⊗ Mars is irrelevant to us now ⊗ A machine for thinking: How Douglas Engelbart predicted the future of computing ⊗ AI and the limits of language

A year ago →{.caps} A favourite in issue No.186 was Technological Lessons from the Pandemic by Z.M.L.

◼{.acenter}

Quick note: I seem to have gotten around the Gmail sp!m issue, and actually hit some open rate numbers last week that I hadn’t in a while. Beyond the filtering issue, styles were also stripped away, which made the newsletter look a bit shit. You can have a look at the properly styled No.230 and No.231 on the website, and this one should be back to normal. Thanks for your patience.

— Two of the featured articles below are, roughly, about the ‘intelligence’ in ‘Artificial Intelligence.’ Both are, in different ways, about which words we use for what, in the hope of better ‘placing’ what is actually there, so that we can better think about the potentials and dangers. Those reflections around language are important, and there are a few good ones around, including those two but also [[model-is-the-message|The model is the message]] from before my summer break. I tend to use the term ‘synthetic’ quite a bit, and that use overlaps with some of their arguments, so I thought I’d recap some ideas here before we get to the articles. Btw, my preference for ‘synthetic’ is very similar to Bratton’s in [[planetary-sapience|Planetary sapience]].

The metaverse is misrepresented and/or largely hype. ‘Synthetic reality’ is more interesting to me. By which I mean recreations of aspects of reality, either to enhance experiences (in video games, for example, or art in VR), or to enhance our ability to represent reality (as in special effects, digital twins, etc.).

Artificial Intelligence is misrepresented and/or largely hype. ‘Synthetic intelligence’ is more interesting to me. By which I mean roughly everything we currently call AI, minus the part where people believe it’s producing something akin to actual intelligence. Yes, that often turns into semantic debates, but the real tools and possibilities are more intriguing to me and powerful enough (now or in the medium term) to be worth investigation and critique, without going into megalomania and fears of Artificial General Intelligence.

Finally, ‘synthetic media’ is kind of an intersection of the two, where AI models are used to create text, images, and videos based on billions of pieces of human-made media.

Beyond hyperanthropomorphism

Quite a long piece by Venkatesh Rao, written in a more ‘academic’ fashion than I personally prefer, meaning he cites a lot of other people and ideas to ‘back up’ his argument, sometimes to the detriment of easy parsing.

Rao proposes a thought experiment where he generously grants some validity to certain hand-waving positions (which he calls “philosophical-nonsense”) about claimed pseudo-traits of AI (“sentience,” “consciousness,” “intentionality,” “self-awareness,” “general intelligence”), and then tries to prove them by digging behind the words for concepts or data that would support those positions. Needless to say, he fails, which supports his point that “hyperanthropomorphic projections” are wrong, waste our time, and promote unfounded fears.

Bear in mind, he doesn’t wave away potential dangers; he waves away imagined intelligence-based dangers. Technologies can still be dangerous, bridges do collapse, and a swarm of killer drones would still kill, but not through some form of advanced intelligence.

I’m not going to try to synthesise this too much; it’s worth the effort to read in full. There are two main things I’d still like to pull out, though. First, he uses “the idea of there being something it is like to be an entity,” which he shortens to “SIILTBness.” “There is something it is like to be a bat. There is something it is like to be a chimpanzee. There is something it is like to be a human.” He spends a great chunk of the article wondering whether there is a SIILTBness to AI, which leads him to the naive case for fear, a favourite section of mine.

Second, an argument also made elsewhere about embodiment, which we can better understand through his piece. In short, there is a breadth and depth to human perception of the world (a bandwidth) that, combined with the complexity of our brain, creates an understanding of ourselves as selves. If another intelligence doesn’t have that understanding, how can we use our impression of “intelligence” as a shared trait, much less one that can be compared?

There’s a there there that the pseudo-trait terms gesture at. Our current language (and implied ontology) is merely inadequate to the point of uselessness as a means of apprehension. […]
In other words, to the extent the computer is like the brain, there should be something it is like to be a computer, and we should be able to experience at least some impoverished version of that, and going the other way, there should be something it is like for a computer to experience being like a human (or superhuman). […]
==This is dragon-hunting with magic spells based on extrapolating the existence of clouds into the existence of ectoplasm. We’re using two rhyming kinds of philosophical nonsense (one that might plausibly point to something real in our experience of ourselves, and the other something imputed, via extrapolation, to a technological system) to create a theater of fictive agency around made-up problems.== […]
The answer is clearly no. The sum of the scraped data of the internet isn’t about anything, the way an infant’s visual field is about the world. So anything trained on the text and images comprising the internet cannot bootstrap a worldlike experience. So conservatively, there is nothing it is like to be GPT-3 or Dalle2, because there is nothing the training data is about. […]
==AI is too interesting to sacrifice at the altar of confused hyperanthropomorphism. We need to get beyond it, and imagine a much wider canvas of possibilities for where AI could go, with or without SIILTBness, and with or without super-ness of any sort.==

Mars is irrelevant to us now

At Farsight, a short but excellent interview with Kim Stanley Robinson on what he can contribute to discussions about the climate crisis (==“Three things: the future as subject for speculation; the syncretic combination of all the fields into a holistic vision of civilisation; and lastly, narrative as a mode of knowing.”==), his fictional Ministry for the Future, legislation options, clean energy, geoengineering, coops, and some fun chiding of the interviewer at the end.

Geoengineering is a vague term that has been demonised, so it is perhaps not useful to keep using it. Each action proposed has different costs, potential benefits, and potential dangers, so they need to be discussed individually and not as a class. […]
Mars is irrelevant to us now. We should of course concentrate on maintaining the habitability of the Earth. My Mars trilogy is a good novel but not a plan for this moment. If we were to create a sustainable civilisation here on Earth, with all Earth’s creatures prospering, then and only then would Mars become even the slightest bit interesting to us. It would be a kind of reward for our success – we could think of it in the way my novel thinks of it, as an interesting place worth exploring more. ==But until we have solved our problems here, Mars is just a distraction for a few escapists, and so worse than useless.== […]
==[N]ature? You are nature, nature is you. Natural is what happens. The word is useless as a divide, there is no Human apart from Nature, you have no thoughts or feelings without your body, and the Earth is your body, so please dispense with that dichotomy of human/nature, and attend to your own health, which is to say your biosphere’s health.==

A machine for thinking: How Douglas Engelbart predicted the future of computing

Most readers probably know about Engelbart and the Mother of All Demos, but this piece by Steven Johnson is a good read on the topic anyway, with an overview of his life’s work and some of the ‘sceniusian{.internal}’ influences and interconnections, including Bill English, Stewart ‘Forrest Gump’ Brand, and how the “Bay Area tech scene lay at the unlikely intersection of three distinct cultural rivers: the intellectuals and scientists in the orbit of Stanford and Berkeley; military funding from DARPA; and the counterculture that had become such a dominant presence in Northern California during the period”.

[A] future device that Bush called the Memex, a machine for augmenting our memories and our intellect, just as telescopes and microscopes had augmented our vision. Bush described it as a kind of “mechanized file or library” where people would someday store their books and documents and correspondence, making “trails” of association between all the data, like paths beaten down through a dense forest of information. […]
==“Man’s population and gross product are increasing at a considerable rate, but the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity.== Augmenting man’s intellect, in the sense defined above, would warrant full pursuit by an enlightened society if there could be shown a reasonable approach and some plausible benefits.” […]
“The personal computer revolution,” he wrote in the 1980s, “turned its back on those tools that led to the empowering of… distributed work groups collaborating simultaneously and over time on common knowledge work.” It wasn’t until the rise of cloud computing and services like Slack and Google Docs that Engelbart’s original vision of collaborative software truly came of age.

AI and the limits of language

If you read one thing about AI this week, go back to the first feature above, but this one by Jacob Browning and Yann LeCun is also quite good if you are tracking the thinking around semantics, “intelligence,” embodiment, and the potential of AI. They say “it isn’t clear what semantic gatekeeping is buying anyone these days,” linking to ‘The model is the message,’ which I mentioned above, in a way that seems dismissive. It’s funny because, in my opinion, they are mostly on the same side, trying to find a way to talk about AI without getting stuck in erroneous parallels and fabulations. They aren’t saying the exact same thing, but they are pointing in the same direction.

[L]anguage doesn’t exhaust knowledge; on the contrary, it is only a highly specific, and deeply limited, kind of knowledge representation. […]
All representational schemas involve a compression of information about something, but what gets left in and left out in the compression varies. […]
==It is thus a bit akin to a mirror: it gives the illusion of depth and can reflect almost anything, but it is only a centimeter thick. If we try to explore its depths, we bump our heads.== […]
[T]he deep nonlinguistic understanding is the ground that makes language useful; it’s because we possess a deep understanding of the world that we can quickly understand what other people are talking about.

Asides

{.miscellany}

Your Futures Thinking Observatory