Newsletter No.267 — Jun 04, 2023

Thought Experiment in the National Library of Thailand ⊗ The Insufficient Weirdness Hypothesis ⊗ Seeing Beyond the Beauty of a Vermeer

Want to understand the world & imagine better futures?

Also this week → ‘I do not think ethical surveillance can exist’ ⊗ G7 nations must preserve the West’s presence on the world stage ⊗ Design futures seminar ⊗ Retro-futurist landscapes inspired by rock idols and writers

Thought experiment in the National Library of Thailand

Love this thought experiment by Emily M. Bender. She (the post riffs off a paper co-written with Alexander Koller) makes a strong case that current AIs—I’m assuming that will be the case for a while with LLMs—don’t really understand the meaning of what they are saying. It’s literally just advanced probabilistic prediction of sequences of words (the article only tackles language). Astonishing results sometimes, but no understanding of the meaning of what’s in there. Bender explains her library-based thought experiment, followed by some common replies and her own rebuttals.

I’m not arguing with her, per se; I trust her expertise on this far more than I do engineers’. But what if at this scale there’s another way to find meaning? The whole argument, in the end, is based on how we humans have done it, but we don’t know what we could figure out over millions of years. That’s what an AI does: it goes over an amount of information that would take one person millions of years to process, and thus does something no one has ever done. AlphaGo made moves no one expected; it compressed time, studied far more games than anyone had ever managed to, and some different moves emerged. (Venkatesh Rao calls it superhistory, as seen in No.173.)

I’m not saying it’s the case, but I’m really wondering: what if something like an understanding of meaning emerges at that scale? What if it’s not just a difference in scale but in quality?

Because it models those distributions very closely, it is good at spitting out plausible sounding text, in different styles. But, as always, if this text makes sense it’s because we, the reader, are making sense of it. […]

Nonetheless, when we see a language model producing seemingly coherent output and we think about its training data, if those data come from a language we speak, it’s difficult to keep in focus the fact that the computer is only manipulating the form — and the form doesn’t “carry” the meaning, except to someone who knows the linguistic system. […]

It doesn’t matter how “intelligent” it is — it can’t get to meaning if all it has access to is form. But also: it’s not “intelligent”. Our only evidence for its “intelligence” is the apparent coherence of its output. But we’re the ones doing all the meaning making there, as we make sense of it.

Algorithms, Automation, Augmentation → Paragraphica. Love this project by Bjørn Karmann! “The camera operates by collecting data from its location using open APIs. Utilizing the address, weather, time of day, and nearby places. Combining all these data points Paragraphica composes a paragraph that details a representation of the current place and moment.” ⊗ A photographer embraces the alien logic of AI. “Charlie Engman’s experiments with Midjourney have yielded fleshy distortions, peculiar make-out sessions, and unfamiliar pictures of his mother.” ⊗ Watch this Nvidia demo and imagine actually speaking to AI game characters.

The insufficient weirdness hypothesis

Jon Evans is one of my favourite writers on AI, but this issue of his newsletter is actually about futures. Jon is a programmer, science fiction writer, and currently works at the forecasting platform Metaculus. So he knows a thing or two about thinking about the future, and here he runs us through his “insufficient weirdness hypothesis,” which I recommend reading through. In short: look at today as the future of your past self from 20–30 years ago. Some things are as expected, but a lot of it is very weird indeed.

He argues that the current doomsday alerts on AI are not weird enough; things will change in unexpected directions, and we can’t worry too much about one precise version of the far future (“AI will kill us all,” in this case). My summary might make it sound like he’s saying “do nothing,” but that’s not the case; he’s reminding us that we simply don’t know, and that there’s a greater variety of potential futures to consider.

Perhaps strangely, I was reminded of a chat a couple of years ago with a friend who hadn’t started a podcast six or seven years before that because it was already “too late,” which of course it wasn’t. He lived (lives) so close to the edge of technological change that it seemed old hat to him, while it was actually still early days. Some AI pundits and engineers might be doing the same thing. The future is not a direct extension of what you are imagining from your current narrowly focused work; it will most likely be something else entirely, you’re just too close to that one part.

Our future is going to seem really weird! We know this because a) the future has consistently seemed that way to its past for a considerable time now, b) the causes of this weirdness — the ever-tighter interconnection of humanity, the increased ease and speed with which butterfly-wing emergent properties spread across the world — are only accelerating and intensifying. […]

I provisionally define “weird” as “the jarringly unexpected, especially when referring to the results of previously implausible/unlikely juxtapositions and/or events or forces significantly influenced by what had been unknown unknowns.” […]

Does this mean we can’t forecast the future at all? Absolutely not! But it does mean that visions of the future which do not include great weirdness and unknown unknowns — ones which simply grimly extrapolate from today’s ephemeral trends — are guaranteed wrong. I call this the Insufficient Weirdness Hypothesis. […]

The whole point of the Insufficient Weirdness Hypothesis is that mere intuition and extrapolation are wildly insufficient for planning for our weird future.

Futures, foresights, forecasts & fabulations → Design Futures (Futures Seminar). “Advisor & Futurist Lovisa Volmarsson together with Tobias Revell (Design Futures Lead at Arup) and Phil Balagtas (Design Director at Habitat) delved into the concept of ‘futures thinking’ and how we through design can explore extraordinary images of tomorrow and urgent examinations of the many questions facing us today.” ⊗ Lovely! Retro-futurist landscapes inspired by rock idols and writers – in pictures. “Maxine Gregson’s artworks – which have been described as “nostalgic futurism” – combine postcards and magazines bought on eBay, as well as her own photography, with snippets of lyrics and literature.” ⊗ Dear future … We made this for you. An article about IDEO’s speculative work over the years. ⊗ A brief history of futures, a paper from 2015 by Wendy Lynn Schultz.

Seeing beyond the beauty of a Vermeer

Teju Cole for The New York Times on his visit to the great Vermeer exhibition, but more importantly on what he sees in great paintings. Not only what the scene represents, but what each object, fabric, look, and pose tells us about the society that surrounded the painter. Where did that pearl come from? Or that fur, that colour pigment? Recommended for what Cole expresses, to better appreciate Vermeer, but also simply because the man can write and the trip is worth taking.

My relationship with art has changed. I look for trouble now. No longer is a Vermeer painting simply “foreign and alluring.” It is an artifact inescapably involved in the world’s messiness — the world when the painting was made and the world now. […]

“Vermeer seems almost not to care, or not even to know, what it is that he is painting. What do men call this wedge of light? A nose? A finger? What do we know of its shape? To Vermeer none of this matters, the conceptual world of names and knowledge is forgotten, nothing concerns him but what is visible, the tone, the wedge of light.” […]

Any work of art is evidence of the material circumstances in which it was produced. The very best works of art are more than evidence. Inside a single frame, within a single great painting, complicity and transcendence coexist. […]

This is why, finally, one goes to museums: for the chance to learn to see again, to see beauty, to see trouble. […]

His paintings (and those by others; the implications of this argument are not limited to Vermeer) cannot be taken as mere decorations or technical achievements. They contain the knowledge of their own sorrow and can tolerate more honest context than we often allow them.

‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI “Moral outsourcing, she says, applies the logic of sentience and choice to AI, allowing technologists to effectively reallocate responsibility for the products they build onto the products themselves – technical advancement becomes predestined growth, and bias becomes intractable.”

The End Of History Club “As an alliance of open societies, the G7 must see its primary role going forward not as pushing the end of history onto unwilling others, but as striving to preserve and sustain the active presence of the West on a world stage it doesn’t command as it once did.”