The mythology of conscious AI ⊗ The discourse is a Distributed Denial-of-Service attack

No.387 — Research as a form of pattern disruption ⊗ 10 things I learned from burning myself out with AI coding agents ⊗ Kim Stanley Robinson, science fiction maestro and utopian ⊗ What our Blue Planet really looks like

The mythology of conscious AI. Created with Midjourney.

A good proportion of the positive feedback I get about this newsletter goes something like this: “I look forward to your Sentiers each Sunday. It is something I enjoy and find inspiration from over my morning coffee!” (Thanks Todd!) So make a cup of your favourite coffee, find a comfortable seat, and enjoy the reads; I think the two features tackle entirely different but equally important topics to ponder. As always, replies are more than welcome, and yes, if you have a moment to share with a friend or colleague, every bit helps.

To expand a bit on the comment quoted above, maybe James Hoffmann needs some kind of curated list of recommended reads to enjoy coffee with. An entirely self-serving idea, I know. I’d both love to be on the list and to read through it.


The mythology of conscious AI

Long, fascinating piece by neuroscientist Anil Seth. First, a slight side trip. A couple of weeks ago, I shared a post where Robin Sloan argued that AGI has already arrived. There was a bit of hubbub in the replies in my inbox, and most of it had to do with one of two things. The first was the “historical” understanding of AGI, which has roughly meant human-equivalent intelligence and was not what Robin wrote about. The second has to do with language, i.e. what we use the word “intelligence” for. Reading this mythology piece, I realised (finally?) that many of us, possibly most of us if we stopped to think about it, tend to equate intelligence with consciousness, or at least hold the two so close to each other in our minds that when thinking about the I in AI we actually have both words in mind. They are, of course, not the same. “Intelligence is the ability to achieve complex goals by flexible means,” while “consciousness, in contrast to intelligence, is mostly about being.”

Back to the piece itself: Seth argues that conscious AI remains highly unlikely because consciousness probably requires biological life, not just computation. He challenges computational functionalism, the assumption that implementing the right algorithms suffices for consciousness, by showing how psychological biases (anthropomorphism, the seductive power of language like “AI hallucinations”), the limitations of the brain-as-computer metaphor, and the existence of non-computational processes (continuous dynamics, stochastic phenomena, electromagnetic effects) all undermine claims that silicon can replicate consciousness. Real brains exhibit deep multiscale integration, where individual neurons engage in metabolism and self-maintenance that resist any clean separation between function and substrate. One insight: every entity we acknowledge as conscious is alive, suggesting consciousness connects fundamentally to biological self-regulation and the thermodynamic imperative to resist entropy, rather than to abstract information processing.

Beyond the scientific argument, Seth offers a cultural critique of Silicon Valley’s pursuit of conscious AI. He identifies how financial incentives, rather than rigorous evidence, drive some researchers’ enthusiasm for machine consciousness, framing the entire enterprise as “techno-rapture”, a Promethean fantasy about transcending biological limits and escaping mortality. This mythology exploits exponential-growth rhetoric to create psychological pressure toward believing in imminent breakthroughs despite scant evidence. His simulation-versus-instantiation distinction clarifies the stakes: computational models lack the causal powers of what they model, just as simulating digestion doesn’t digest.

I’d draw your attention to something he says in passing, where he identifies a practical implication that might be more pressing than the theoretical debate: “even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions.” In other words, the appearance of consciousness could prove “good enough” and fundamentally shape human-AI interaction, regardless of the underlying reality.

But here’s where the trouble starts. Inside a brain, there’s no sharp separation between “mindware” and “wetware” as there is between software and hardware in a computer. The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon. […]

There is, therefore, something of a contradiction lurking for those who invest their dreams and their venture capital into the prospect of uploading their conscious minds into exquisitely detailed simulations of their brains, so that they can exist forever in silicon rapture. If an exquisitely detailed brain model is needed, then you are no more likely to exist in the simulation than a hailstorm is likely to arise inside the computers of the U.K. meteorological office. […]

Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism. […]

Computational simulations generally lack the causal powers and intrinsic properties of the things being simulated. A simulation of the digestive system does not actually digest anything. A simulation of a rainstorm does not make anything actually wet. If we simulate a living creature, we have not created life.

The discourse is a Distributed Denial-of-Service attack

Great piece by Joan Westenberg that is, at the same time, an example of part of the problem. What she’s writing about isn’t new; I’ve used the DDoS analogy multiple times myself in trying to describe our current information predicament. But, as she explains, understanding something, and not just having an opinion about it, takes time. Thus, she’s “late” to the issue. Here, however, “late” actually means a well-argued point, a strong position, and a reflection you should take the time to sit with.

Westenberg argues that endless controversies exhaust our collective cognitive capacity through sheer volume, preventing the sustained attention required for deliberative thinking. By the time we marshal resources for careful analysis, the conversation has moved on and we’re already several outrages behind, forcing us into permanent reactive mode. False information spreads faster—70% more likely to be retweeted—precisely because it’s simpler and emotionally compelling, whilst truth requires cognitive work we cannot afford under this constant bombardment.

The discourse transforms understanding into mere positioning, rewarding confidence over competence, gradually rewiring participants into thinkers incapable of stepping back from the flood. Just as you’re not in traffic, you are traffic, we now also “do this shit to ourselves. We are our own botnet.” According to the author, the only viable response is deliberately stepping back to deeply understand one topic rather than frantically positioning on everything, reclaiming the capacity for actual thought over perpetual reaction.

The discourse takes the most important problem of our time and converts it into an infinite series of tribal skirmishes, each of which generates heat and engagement while bringing us no closer to answering any of the actual hard questions. […]

You can have a position on something without understanding it, and you can understand something without having a confident position on it. […]

The philosopher Bertrand Russell remarked that the fundamental cause of trouble in the world is that the stupid are cocksure while the intelligent are full of doubt. […]

When many ideas compete for limited attention, the ideas that are best at capturing attention win, and those that aren’t good at it die out. This creates selection pressure toward attention-grabbing content, which tends to be extreme, emotional, simple, tribal, and visceral. The ideas that survive aren’t the most true or useful. They’re the most viral. […]

But the discourse hates expertise. Or rather, it puts experts in an impossible position. To engage with the discourse, an expert has to compress their nuanced understanding into takes that can compete with the confident nonsense being spouted by random accounts with anime avatars.


Futures, Fictions & Fabulations

  • Research as a form of pattern disruption. “To spot weird signals, you need to go down rabbit holes. Follow your intuition. And remember, pursuing rabbit holes is not always an act of procrastination. Sometimes, it’s simply your mind telling you to follow your curiosity. Weirdness can present itself at any given moment, through any medium.”
  • Three Narratives for the Future of Work. “That is why, when asked whether I am optimistic or worried about the future of work, my answer is deliberately uncomfortable: I refuse the binary. I do not think we should be ‘optimists’ or ‘pessimists.’ We should be prepared.”
  • CES 2026 trends. “Explore VML’s top takeaways from CES 2026 – from AI and humanoids to health spans and wearable tech that’s shaping the future.”

Algorithms, Automations & Augmentations

  • 10 things I learned from burning myself out with AI coding agents. “Fifty projects later, I’ll be frank: I have not had this much fun with a computer since I learned BASIC on my Apple II Plus when I was 9 years old. This opinion comes not as an endorsement but as personal experience: I voluntarily undertook this project, and I paid out of pocket for both OpenAI and Anthropic’s premium AI plans.”
  • Anthropic Economic Index report: Economic primitives. “These ‘primitives’—simple, foundational measures of how Claude is used, which we generate by asking Claude specific questions about anonymized Claude.ai and first-party (1P) API transcripts—cover five dimensions relevant to AI’s economic impact: user and AI skills, how complex tasks are, the degree of autonomy afforded to Claude, how successful Claude is, and whether Claude is used for personal, educational, or work purposes.”
  • OpenAI to test ads in ChatGPT as it burns through billions. Enshittification proceeding as expected. “The move represents a reversal for CEO Sam Altman, who in 2024 described advertising in ChatGPT as a ‘last resort’ and expressed concerns that ads could erode user trust, although he did not completely rule out the possibility at the time.”

Built, Biosphere & Breakthroughs

Asides

  • This Ocean Map shows what our Blue Planet really looks like. “The world is 71 per cent ocean, but you wouldn’t know it from looking at a standard world map. What’s great about the new Ocean Map is that it encourages us to consider the world from a different perspective, one which reclaims the importance of the ocean on which we all depend.” (I looked for this map after seeing this visualisation of Earth’s surface, which was linked in the Robinson interview above.)
  • Powering change: a visual journey into China’s green transition. “The exhibition showcased aerial photographs of China’s renewable energy landscape—solar farms, wind turbines, and hybrid energy projects—alongside stories of people and communities living amid the country’s massive energy transformation.”
  • Back-scratching bovine leads scientists to reassess intelligence of cows. A missed opportunity by the headline writer; this should have been “Back-scratching cow leads to head-scratching scientists.” “Scientists have been forced to rethink the intelligence of cattle after an Austrian cow named Veronika displayed an impressive – and until now undocumented – knack for tool use.”

Your Futures Thinking Observatory