Cathedrals of convention ⊗ Clip art doesn’t come to life ⊗ Could we invest in nature?

No.303 — What monks know about focus ⊗ The end of shared reality ⊗ Inflection is eaten alive by its biggest investor

Isolated monastery. Created with Midjourney.

Cathedrals of convention

The author discusses the concept of naturalism, the idea that humans have an impulse to see things that are arbitrary or conventional as natural and essential. He also delves into the connection between language, convention, and naturalness, as well as how these perceptions shape our understanding and use of language. The bulk of the article is spent showing examples of how people have repeatedly tried to explain language as a natural process that could only have turned out the way it did, then read population traits into differences between languages, using those readings to validate their beliefs or their societies.

Three things I would have liked to see addressed; not that they are missing, just side quests I would have enjoyed. First, because we’re in the AI moment, I couldn’t help but wonder if there are lessons, parallels, or metaphors to glean in there for our understanding of AI. It’s especially disappointing not to see it included, considering the author “is a writer and AI researcher who applies machine learning techniques to Bayesian models of language and vision.”

Second, when it comes to the section on conventions and common knowledge, I’d be very curious to see how that works with bilingual people, especially those for whom the two (or more) languages coexist in the same conversations. Finally, I’m also wondering how words transition from one meaning to another, and how that happens in common knowledge, for example when ‘sick’ goes from being ill to being fantastic (‘sick move,’ ‘sick show’). Or any other term like it transitioning or adding a meaning.

Nothing is arbitrary, and everything has an explanation that ties human concerns into the fabric of nature. It’s not that a savage person is like a wolf, it’s that they are literally a wolf, whose savage essence is laid bare. Metamorphosis, instead of metaphor. […]

So how does information become common knowledge? It is a question that is particularly striking when the convention is language. Because after all, languages are the cathedrals of conventions. Each is a vast relationship between the form of words (sequences of them, really) and the information that those words convey. […]

Language as a whole, a much more elaborate piece of common knowledge, evolves by a similar mechanism. Each time someone speaks to us, the choices of words, their intonation, the idioms they use, and so on are presupposing a language, which we accommodate. But everyone else is doing the same, accommodating the language we produce.

Clip art doesn’t come to life

It’s too bad that in this essay Eryk Salvaggio skips a distinction early on, one he does make later; that choice takes the piece from fantastic to merely great. There are a lot of great phrasings and angles explaining LLMs, generative AI, and the business models of the leading companies in the field, first and foremost OpenAI.

The distinction I mention is between companies that see their models as the early stages of AGI and LLMs built for precise use cases. His whole (correct) argument is that the former are using chatbots and image generators to draw attention and convince everyone that the only way to make these products and sustain these companies is by ingesting ever larger amounts of data, with no regard for rights, of course, and not much more for what they might do to society.

Along the way Salvaggio caricatures current generative AI as clip-art generation or even, in a weird twist, as storage. All along I was thinking ‘yes, but inventing drugs, folding proteins, etc.’ He comes to it in the end, pointing to the value of narrow models. “That’s a different kind of approach to building AI—it’s narrow—though it was wrapped in a general LLM. That’s a more promising example, for me, than throwing all the data in the world in a bucket and expecting an understanding of the world to emerge.”

All of that to say: a great read, worth your time. Just don’t argue with the first part, because he gets to an important point later, and even if you know the topic well, it will sharpen your understanding of it.

Fundamentally, this “AI world model” frame is a political frame. It’s used to assert a certain kind of politics about the world and its meanings: that the world is fundamentally data, and that if we have access to enough data, we can recreate models of this world close enough to our own that a data-driven agent can interface with our physical reality seamlessly. […]

Language recombination, shaped by statistical probability, is different from understanding the world, and the language we use to describe it. What these companies built are successful at arriving at an output that simulates real thought, but does not reproduce real thinking. […]

People reap clearer benefits from highly specialized, narrowly focused systems rather than attempting to build a “one-size-fits-all” system, which is what AGI assumes. […]

We would do better to focus on “aligning” the data collected for these things to the purposes to which they are meant to serve. But OpenAI’s “alignment” strategy focuses on the outcomes of this process, fixing things that emerge from the model. It frames big tech as the only capable actor for determining how an AGI system is used.

Nature has value. Could we literally invest in it?

I’m pretty sure you could easily take this article from The New York Times, shuffle some paragraphs, change a few phrases, and take it from largely positive to largely negative. I’d likely be much more aligned with that new version. The idea behind “natural asset companies” is that they would aim to assign a market value to ecosystems for the purpose of protecting and preserving nature. They would work with landowners to license ecosystem services and generate revenue streams.

The project featured in the piece was eventually pulled back, following heavy pushback from conservatives and environmental groups. Definitely not shared as an endorsement, more as a topic to keep an eye on.

Such a company doesn’t yet exist. But the idea has gained traction among environmentalists, money managers and philanthropists who believe that nature won’t be adequately protected unless it is assigned a value in the market — whether or not that asset generates dividends through a monetizable use. […]

There is also pushback, however, from people who strongly believe in protecting natural resources, and worry that monetizing the benefits would further enrich the wealthy without reliably delivering the promised environmental upside.

§ What monks know about focus. “A book is a tool. It’s a machine for thinking. And ‘all machines,’ as Thoreau once said, ‘have their friction.’ The time it takes to engage with ideas—whether factual or fictional, emotional or intellectual, accurate or inaccurate, efficient or inefficient—might strike some as a drag. But the time given to working through those ideas, adopting and adapting, developing or discarding, changes our minds, changes us.” Related: perhaps this is an old man yelling to get off his lawn moment, but people saying they “read” a book when they listened to the audiobook still makes me twitch. Not judging per se, it’s just… weird?

§ Kate Middleton and the end of shared reality. “But synthetic media seems poised to act as an amplifier—a vehicle to exacerbate the misgivings, biases, and gut feelings of anyone with an internet connection. It’s never been easier to collect evidence that sustains a particular worldview and build a made-up world around cognitive biases on any political or pop-culture issue. It’s in this environment that these new tech tools become something more than reality blurrers: They’re chaos agents, offering new avenues for confirmation bias, whether or not they’re actually used.”

Futures, Fictions & Fabulations

Winning combinations for successful strategic foresight
“Strategic foresight can be an invaluable tool for organizations to anticipate potential challenges and opportunities, and better prepare for the future. But what is the ideal set-up to achieve specific outcomes?”

The Future of Finance is Female
“The Future Laboratory partnered with Allied Irish Banks (AIB) on a foresight report that reveals how women are pioneering a financial system change. It was covered by nearly 20 national and international media outlets – and was read or viewed by over half the adult population in Ireland.”

Getting to grip with collapse (PDF)
Article in COMPASS by Andrew Curry. “Collapse is a subject that is of interest to archaeologists, historians, anthropologists, environmentalists, philosophers, literary critics, narratologists, system theorists, and others, as well as to futurists.”

Algorithms, Automation, Augmentation

After raising $1.3B, Inflection is eaten alive by its biggest investor, Microsoft
“Co-founders Mustafa Suleyman and Karén Simonyan will go to Microsoft, where the former will head up the newly formed Microsoft AI division, along with “several members” of their team as Microsoft put it — or “most of the staff,” as Bloomberg reports it. Reid Hoffman will stay behind with new CEO Sean White to try to salvage what’s left of the company.”

Warning over use in UK of unregulated AI chatbots to create social care plans
What’s the long German word for ‘unsurprising yet disappointing nonetheless’? “A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care.”

An AI-driven “factory of drugs” claims to have hit a big milestone
“Zhavoronkov says his drug is special because AI software not only helped decide what target inside a cell to interact with, but also what the drug’s chemical structure should be.”


Your Futures Thinking Observatory