On the origin of minds ⊗ Is it okay? ⊗ Is AI really thinking and reasoning?
No.346 — Cultural intelligence in the Age of AI ⊗ AI to “Neutralize the Accent” of Indian Employees ⊗ Small Vehicles of Shanghai

On the origin of minds
The first article in the last issue was about defining intelligent life; the first one this week, on the origin of mind, is by Pamela Lyon, who was heavily quoted in that piece. That might feel a bit redundant. But I just find the vertigo/awe mix of reading about these discoveries and questions fascinating, even more so when one focuses on what we don’t know. In an era where ‘we’ are trying to reinvent intelligence while living through the Anthropocene, I find it humbling yet inspiring to contemplate both the vastness of diversity in life and the breadth of what we don’t know.
Lyon argues that Darwin’s insight into the continuity of mental evolution challenges the brain-centred focus of cognitive science, suggesting that cognition is a fundamental biological process shared across all life forms. This perspective reveals that cognitive capacities such as perception, memory, and decision-making existed long before complex nervous systems evolved, as demonstrated by behaviours in bacteria and unicellular organisms. Lyon also emphasises the importance of understanding the “domain of interactions” between organisms and their environments, which she believes is essential for a more accurate account of cognition.
This single shift of perspective – from a brain-centred focus, where Homo sapiens is the benchmark, to the facts of biology and ecology – has profound implications. The payoff is a more accurate and productive account of an inescapable natural phenomenon critical to understanding how we became – and what it means to be – human. […]
Perception, memory, valence, learning, decision-making, anticipation, communication – all once thought the preserve of humankind – are found in a wide variety of living things, including bacteria, unicellular eukaryotes, plants, fungi, non-neuronal animals, and animals with simple nervous systems and brains. […]
Yet we still don’t have a good grip on the fundamentals of cognition: how the senses work together to construct a world; how and where memories are stored long term, whether and how they remain stable, and how retrieval changes them; how decisions are made, and bodily action marshalled; and how valence is assessed. […]
‘There is grandeur in this view of life,’ Darwin writes, and he is correct. We can now see ourselves – with scientific justification and with no need for mystical overlay or anthropomorphism – in a daffodil, an earthworm, perhaps even a bacterium, as well as a chimpanzee. We share common origins. We share genes. We share many of the mechanisms by which we become familiar with and value the worlds that our senses make.
Is it okay?
Robin Sloan explores the implications of language models, trained, as we know, on the vast commons of human writing, which he refers to as “Everything.” Acknowledging and then putting aside each side’s most obvious arguments, Sloan asks whether the potential gains for humanity that LLMs might help achieve are greater than the losses that might result from this capture of humanity’s common inheritance. In other words, “is it okay?”
He suggests that if these models could lead to groundbreaking scientific advancements—what he refers to as “super science”—such as curing diseases or solving complex global challenges, then the extensive use of the written commons might be morally defensible. In this context, the potential benefits to humanity could outweigh concerns about the appropriation of digital content.
On the other hand, if the primary application of LLMs is to generate content that competes with or diminishes human creativity—such as producing media that overshadows original works by artists, writers, and other creators—then the practice of training these models on the written commons may not be justifiable.
Robin doesn’t mention it, but it’s basically the exercise the Amish do before adopting a new technology: what are the benefits for the community, and does it help/reinforce or dissolve how we live together? In this case, do LLMs have the potential to bring enough benefits to society? Of course, techbros didn’t ask the question first, so we now have, out in the world, both progress towards great discoveries and the slop machines.
Very much worth a read for the details of the argument and a useful overview of the tension and relation between “AI” and our written words.
Reasonable people can disagree about how the value [of the technology vs the training data] breaks down. While I believe the relative value of Everything in this mix is something close to 90%, I’m willing to concede a 50/50 split. […]
Can’t these companies simply promise, with every passing year, that AI super science is just around the corner … and meanwhile, wreck every creative industry, flood the internet with garbage, grow rich on the value of Everything? Let us cook—while culture fades into a sort of oatmeal sludge. […]
Maybe (it turns out) I’m less interested in litigating my foundational question and more interested in simply insisting on the overwhelming, irreplaceable contribution of this great central treasure: all of us, writing, for every conceivable reason; desire and action, impossible to hold in your head.
Is AI really thinking and reasoning — or just pretending to?
Another piece by someone who’s increasingly one of my favourite tech writers, Sigal Samuel at Vox. AI models are increasingly touted by companies as capable of genuine reasoning—breaking problems into smaller parts and solving them step by step. Unsurprisingly, experts remain divided; skeptics argue that these models often rely more on memorisation and heuristics than true reasoning, while believers assert that they are indeed making strides in reasoning capabilities.
This also leads to the notion of “jagged intelligence,” according to which AI can excel at some tasks while struggling with others. As in the previous essay, where it’s kind of both yes and no, or the one before, where “cognition” is a fluid term defined in various ways by different people, here the lesson is that “the best way to think of AI is probably not as ‘smarter than a human’ or ‘dumber than a human’ but just as ‘different.’” It might sound like a bit of a flat conclusion, but considering how human-like some results can be and how hard companies try to make their products sound human, the best approach is to always remember they are “other,” to consider what you use them for, and to be deliberate about how you evaluate the results.
“I think a lot of what it’s doing is more like a bag of heuristics than a reasoning model,” Mitchell told me. A heuristic is a mental shortcut — something that often lets you guess the right answer to a problem, but not by actually thinking it through. […]
“They do it in a way that doesn’t generalize as well as the way humans do it — they’re relying more on memorization and knowledge than humans do — but they’re still doing the thing,” Greenblatt said. “It’s not like there’s no generalization at all.” […]
“The AI models are like a student that is not very bright but is superhumanly diligent, and so they haven’t just memorized 25 equations, they’ve memorized 500 equations, including ones for weird situations that could come up,” she said. They’re pairing a lot of memorization with a little bit of reasoning — that is, with figuring out what combination of equations to apply to a problem. “And that just takes you very far! They seem at first glance as impressive as the person with the deep intuitive understanding.”
§ HALTUNG: On developing cultural intelligence in the Age of AI. “This is where the power of ‘HALTUNG’ in an AI age becomes clear. While computational algorithms excel at pattern recognition and replication, they operate within strict logical constraints. Human taste, by contrast, functions as a self-aware system that understands not just patterns but context. Not just rules but their cultural significance.”
Futures, Fictions & Fabulations
- From disruption to transformation. “The AAA Framework – Antifragile, Anticipatory and Agility – is at the centre of his work. This scalable model applies equally to individual organisations and whole systems, as standard playbooks and legacy organisational models are becoming increasingly ineffective.”
- Future of animal wellbeing. “The five scenarios in the Wilberforce Report are the output of a participative and creative process. They were developed in collaboration with different voices and perspectives on animal wellbeing, ranging from politicians to journalists, animal rights campaigners to academics, food tech companies to animal welfare lawyers.”
- Five futures where the US ended not with a bang but a whimper. “Sometimes empires just kind of fall apart over time—no catastrophe required.” Posting this for no specific reason at all.
Algorithms, Automations & Augmentations
- World’s Largest Call Center Deploys AI to “Neutralize the Accent” of Indian Employees. “Fresh off the heels of the AI-powered accent adjustments in the Oscar-nominated 2024 film ‘The Brutalist,’ the French company that owns the largest call center in the world has announced that it's using similar technology to “soften” its India-based agents’ accents.”
- Scandi robot servant Neo Gamma is dressed head-to-toe in beige knitwear. I’m shocked, shocked, that Elon’s robots look like killers and the Norwegian ones look like they’re wearing pyjamas. “The household helper was designed to complete a variety of tasks from tidying and vacuuming to doing the laundry, with an integrated AI system allowing it to approximate human speech and body language.”
- Generative AI tool marks a milestone in biology. “Trained on a dataset that includes all known living species – and a few extinct ones – Evo 2 can predict the form and function of proteins in the DNA of all domains of life and run experiments in a fraction of the time it would take a traditional lab.”
Built, Biosphere & Breakthroughs
- Small Vehicles of Shanghai — DeepSeeking the city. (I’ve just skimmed it so far, but it’s one of Dan Hill’s patented extra-long and sure-to-be-insightful reads, so I feel safe in recommending it right away.) “How cheap, open, distributed and diverse systems, like the electric motor, could transform the bruteforced city and our approach to AI, via the possibility of ‘intermediate technologies with a human face’.”
- Costa Rica is saving forest ecosystems by listening to them. “Monitoring the noises within ecosystems reveals their health—allowing researchers to monitor changes in biodiversity, detect threats, and measure the effectiveness of conservation strategies.”
- First global study of animals as architects of Earth. “Harvey and colleagues counted nearly 500 wild species and five domesticated livestock where scientists have documented their ability to influence the shape of the landscape, from the lowly ant to the African elephant.”
Asides
- Thutmose II: Last undiscovered tomb of Tutankhamun dynasty found. “Archaeologists have found the last undiscovered royal tomb of the 18th Egyptian dynasty, which included the famous pharaoh Tutankhamun. The uncovering of King Thutmose II's tomb marks the first time a pharaoh's tomb has been found by a British-led excavation since Tutankhamun's was found over a century ago.”
- Framework wants to fix the budget laptop with its first touchscreen machine. I quite like what the company is doing, and they also shipped a possible new direction for gaming desktops.
- NASA’s new telescope will create the ‘most colourful’ map of the cosmos ever made. “It is an infrared telescope designed to take spectroscopic images – ones that measure individual wavelengths of light from a source. By doing this it will be able to tell us about the formation of the universe, the growth of all galaxies across cosmic history, and the location of water and life-forming molecules in our own galaxy.”