Geist in the machine ⊗ The prospect of Butlerian Jihad

No.397 — Ending the AI arms race ⊗ Insight and future-fit decisions ⊗ Better AI creative collaborators ⊗ Progressive Paris ⊗ The asteroid Ryugu and the building blocks of life

Geist in the machine. Created with Midjourney.

Geist in the machine

This is one of those articles that I find tougher to follow, juggling multiple philosophers’ theses as the author does. It’s worth the effort though. Peter Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we” win a perfectly executed “Stieglerian Revolution” (I just made that up, see the next essay) and end up with a completely different relationship to our technology and capital. However, his argument up to that point is a worthy reflection, and pairs well with the essay below and with another from issue No.387. I’m talking about Anil Seth’s The mythology of conscious AI, where he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open.

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account.

The prospect of Butlerian Jihad

Super piece by Liam Mullally, who uses Herbert’s Dune and the Butlerian Jihad as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons.

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.


§ Ending the AI arms race: why safer futures are still possible & what you can do to help. I can’t really start sharing a Nate Hagens interview or essay every week, and I’ve already mentioned him a few times recently. But I’ll still do a quick share here, for this excellent chat with Tristan Harris, in part for this bit on doom, which I’ll keep in mind. “We can see the truth, and we’re not seeing that because we’re trying to be doomers. You’re seeing that so that you can try to be honest, and it’s the deepest form of optimism to look that truth in the eye and say, and now here’s what we’re gonna do instead.”


“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers

Futures, Fictions & Fabulations

  • Futures Intelligence — Closing the gap between insight and future-fit decisions. “A new integrative capability that brings different forms of future-related insights in one connected sense-making flow to turn them into shared, decision-ready understanding”
  • 2026 AXA Foresight Report. “From dense megacities facing increasing pressures on resources and infrastructure to coastal regions where ocean dynamics reshape risks and opportunities, to agricultural areas adapting to shifting demographics and economic conditions, each territory comes with its own set of transformations. Our exploration focuses precisely on how these futures emerge and unfold differently depending on the territory.”

Algorithms, Automations & Augmentations

  • Stanford scholars train generative AI to be better creative collaborators. “The conversation around AI and art generally swings between two extremes: A flood of AI slop or the total automation of creative work. The more desirable approach may be an AI that behaves as a useful collaborator.”
  • The cognitive costs of AI. “In the space of two years, the discourse around AI and knowledge work has produced an entire family of concepts: Cognitive Offloading, Cognitive Debt, Cognitive Atrophy, Cognitive Drift, Cognitive Surrender. Each more alarming than the last. In sequence they read like an escalation. That escalation is worth examining.”
  • Local opposition is slowing AI data centers. Wall Street has noticed. “A lot of the commitments and the build-out of data centers where it’s easy has kind of been done, so you’re getting marginally more difficult. From a markets perspective, expectations might be, maybe not reset, but realigned with the fact that it’s hard to put a couple trillion dollars in the ground in a short time.”

Built, Biosphere & Breakthroughs

Asides

Your Futures Thinking Observatory