Don’t think of AI chatbots as people ⊗ Shared reality will self-destruct ⊗ What if the Post Office had its own AI model?
No.372 — What’s guiding our Regenerative Futures? ⊗ The way we think about the future needs to change ⊗ AlphaEarth tracks Earth’s changes ⊗ England’s ice-age ghost ponds

Why it’s a mistake to think of AI chatbots as people
We need to recognize that we have built an intellectual engine without a self, just like we built a mechanical engine without a horse.
I don’t often say “must read,” but this is one, to advance your understanding of AI. It’s also the kind of piece we don’t see enough of: a strong critique of a technology that still leaves room for its merits and usefulness. Too many critics pound their fists on the table about what’s wrong and don’t leave room for what works. It’s kind of dishonest, but more importantly it makes their valid arguments less credible. I’m not saying every article needs to “both-sides” everything; something in the tone and in certain details makes the difference between sounding like an absolutist or not.
Benj Edwards explains that, although AI chatbots often feel like consistent people, they’re “vox sine persona”: pattern‑predicting models with no persistent self or agency. The conversational interface is engineered; each reply is generated anew from training‑data patterns plus system prompts, injected memories, and randomness, not from an enduring mind. That personhood illusion can cause real harm: vulnerable users may develop delusions or receive dangerous reassurance, and companies can hide responsibility behind a fictional personality.
Edwards argues, as I often do, that we should treat LLMs as tools to amplify our thinking, craft our prompts deliberately, and keep using our judgement, rather than deferring to a convincing but unconscious voice. Note: if you know AI well, you might have some “yes, buts” as you read; keep at it, he addresses them.
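To make the “generated anew each time” point concrete, here’s a minimal sketch (my own, not Edwards’s) of how a chat product typically assembles its “person” from scratch on every call. It assumes the OpenAI Python SDK; the persona string, the injected “memory,” and the model name are illustrative placeholders.

```python
# Every reply is one stateless function call: the "personality" is just
# text re-sent with each request, plus sampling randomness.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

SYSTEM_PROMPT = "You are a warm, witty assistant named Ada."  # the engineered persona (illustrative)
INJECTED_MEMORY = "Note: the user said last week they grow tomatoes."  # retrieved text, not remembering

def reply(history: list[dict], user_message: str) -> str:
    """Persona + 'memory' + transcript are reassembled and re-sent every time."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT + "\n" + INJECTED_MEMORY}]
        + history
        + [{"role": "user", "content": user_message}]
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
        temperature=0.8,      # the randomness: the same input can yield a different "self"
    )
    return completion.choices[0].message.content
```

Delete the system prompt or the injected memory and “Ada” is gone; between calls, there was never anyone there.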
Reading this I realised a parallel with AI that hadn’t occurred to me before. Practitioners of futures and foresight (the good ones anyway) don’t make predictions; they assess and show possibilities. AI doesn’t give a definitive answer; it gives a plausible continuation of your question. As Edwards says, “you’re interacting with a system that generates plausible-sounding text based on patterns in training data, not a person with persistent self-awareness.”
I’m not saying they are the same, of course, but both are probabilistic, not deterministic. We humans, or at least contemporary ones (perhaps older societies were better at dealing with probabilities), aren’t all that good at thinking in both modes in the same day, much less switching constantly between the two.
Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a human friend after a year, you’re interacting with the same human friend, shaped by their experiences over time. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood. […]
Research has found that personality measurements in LLM outputs are significantly influenced by training data. OpenAI’s GPT models are trained on sources like copies of websites, books, Wikipedia, and academic publications. The exact proportions matter enormously for what users later perceive as “personality traits” once the model is in use, making predictions. […]
The error is in assuming that thinking requires a thinker, that intelligence requires identity. We’ve created intellectual engines that have a form of reasoning power but no persistent self to take responsibility for it. […]
When you stop seeing an LLM as a “person” that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine’s processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator’s view as authoritative. You are providing direction to a connection machine—not consulting an oracle with its own agenda.
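If you want to try that last suggestion in practice, here’s a small sketch of the “multiple perspectives in different chat sessions” workflow: the same question run through several fresh sessions, each steered by a different system prompt. Again this assumes the OpenAI Python SDK; the prompts, the question, and the model name are mine, purely for illustration.

```python
# Query several independent "narrators" rather than deferring to one.
from openai import OpenAI

client = OpenAI()

PERSPECTIVES = {
    "sceptic": "Poke holes in the idea below. List the strongest objections.",
    "advocate": "Steelman the idea below. List the strongest supporting arguments.",
    "historian": "Compare the idea below to past technologies and how they played out.",
}

question = "Should newsrooms rely on LLMs for first-draft reporting?"

for name, system_prompt in PERSPECTIVES.items():
    # Each iteration is a brand-new session: no shared state, no shared "self".
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {name} ---\n{completion.choices[0].message.content}\n")
```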
More → Also by Edwards and related: When ‘no’ means ‘yes’: Why AI chatbots can’t process Persian social etiquette. “New study examines how a helpful AI response could become a cultural disaster in Iran.”
Our shared reality will self-destruct in the next 12 months
I haven’t shared a Ted Gioia article in a while. This is a good example of why: some very smart observations and a potent case being made, but the seeming certainty is a bit annoying. He’s not dealing in possibilities; he writes as if he’s stating “facts.” Regardless, the central worry is real enough and an important issue with LLMs going forward.
He explains that the rapid advancement of technology now allows for the creation of fake audio, video, and text that can irrevocably alter our perception of reality, making it increasingly difficult to distinguish truth from deception. The resulting crisis will deeply affect social cohesion and individual psychology, fracturing the shared benchmarks of truth that society relies on. As a result, new roles such as “custodians of reality” may emerge to validate events and media, helping to restore trust.
Most discussions of this issue focus on the technology. I believe that’s a mistake. The real turmoil will take place in social cohesion and individual psychology. They will both fracture in a world where our shared benchmarks of truth and actuality disappear. […]
It will get worse—and very soon. This will impact every sphere of society: education, healthcare, law enforcement, religion, etc. Even at this early stage of reality denial, we are seeing the fallout, but it’s tiny compared to what’s ahead. […]
But our ability to deal with the problems does not evolve as fast as the technology. We struggle to adapt—and this makes us highly vulnerable. […]
In fact, I have a hunch that the next BIG thing in tech just might be fixing the mess created by the current BIG thing in tech.
§ Reading and writing with AI. I went overboard answering a question on LinkedIn and committed a whole article where I talk about how we research, write, and read with AI—or not. Includes some tricks and recommended apps and services to leverage AI as a thinking assistant—it’s a members’ Dispatch sent Friday, unlocked for a short while. By the way, I’m considering doing an AMA (Ask Me Anything) on this kind of use of AI as it relates to reading, writing, and Personal Knowledge Management. Interested?
§ What if the Post Office had its own AI model? “[LLMs] could be anything, with no requirement that they scale up to billions of users immediately, or generate income from addicted users or eager managers--and if we could collectively determine the direction of their development and research--what else might they be?”
§ What’s guiding our Regenerative Futures? “However, recent discussions by Jason W Moore, Andreas Malm and others offer a critique of this concept in making the case for the Capitalocene as a more precise term. Rather than treating humanity as a homogenous force as Anthropocene theory does, the Capitalocene examines how differences in responsibility, power and agency within societies have been compounded in the context of the capitalist system, and how this system has driven ecological crisis.”
Futures, Fictions & Fabulations
- The way we think about the future needs to change. “How do you feel about the future? When I ask people this question, I get a consistent mix of responses: apprehension, dread, worry, resignation and perhaps some curiosity. These sentiments are widely shared: in a global survey of 10,000 young people aged 16-25, conducted in 2021, 75% said that the future is frightening, while 56% said they think humanity is doomed.”
- Using the Future - Contributions to the Field of Foresight. “In a world where traditional decision-making frameworks are showing their limitations, the need is growing for knowledge, tools, and processes that enable more effective long-term thinking in decision-making.”
- Envisioning three tomorrows. “To answer these questions and help leaders capture some of the value in motion across the decade ahead, PwC has quantified the potential impact of these forces on the global economy of 2035. To envision 2035 as realistically and usefully as possible, we focused on a trio of plausible global scenarios: three divergent tomorrows.”
Algorithms, Automations & Augmentations
- DeepMind’s AlphaEarth tracks Earth’s changes. “An AI model that treats Earth like a living dataset, tracking crop cycles, coastlines, urban expansion, melting ice, and much, much more. AlphaEarth weaves together disparate data streams, from satellite imagery and sensor data to geotagged Wikipedia entries, into a unified digital representation that scientists can probe to uncover patterns unfolding worldwide.”
- OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws. I was today years old (ok, Thursday) when I learned that Computerworld still exists! “In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.”
- We urgently call for international red lines to prevent unacceptable AI risks. “Launched during the 80th session of the United Nations General Assembly, this call has broad support from prominent leaders in policy, academia, and industry.”
Built, Biosphere & Breakthroughs
- ‘It’s resurrection’: 1,000-year-old seeds could grow ancient plants in England’s ice-age ghost ponds. “New surveys by Sayer’s team have revealed that 22 of the ghost ponds restored since 2022 now support 136 species of wetland plant. This represents 70% of the wetland flora found in more than 400 ponds on Norfolk Wildlife Trust’s Thompson Common, an internationally important nature reserve whose ponds have survived since the ice age.”
- Chemists turn plastic waste into carbon capture material. “Researchers have developed a new chemical process that transforms discarded PET bottles into an efficient, durable carbon-capture sorbent—tackling both plastic pollution and greenhouse gas emissions at once.”
- US and China race to mine Moon’s $19M-per-kilogram helium-3. “The competition to reach the Moon's south pole has intensified dramatically in recent weeks, with the United States and China racing to establish dominance over a region rich in valuable resources including water ice and rare helium-3 isotopes. Acting NASA Administrator Sean Duffy recently declared America's ‘manifest destiny to the stars’ during an internal briefing, emphasizing the urgency of beating China in what officials are calling a second space race.” Arseholes.
Asides
- Engineering LEGO cars to climb increasingly tall walls. Production quality, creativity, humour, suspense: this video has everything. “They’re really about science and engineering — trial and error, repeated failure, iteration, small gains, switching tactics when confronted with dead ends, how innovation can result in significant advantages.”
- Cuba’s Distinctive Architecture Glows in Vibrant Photos by James Kerwin. “Based in Istanbul, a city also renowned for its architecture, Kerwin channels a fascination for history and the built environment, especially the way time makes an indelible mark on structures left exposed to the elements.”
- Childcraft’s How and Why Library. “The encyclopedia business was booming across the 20th century, enough so that the publishers of the World Book encyclopedia expanded into creating a kid-focused version in 1934.”