Civilizational optionality ⊗ The social edge of intelligence

No.399 — The term “AGI” is almost useless at this point ⊗ Frugal AI ⊗ Design futures in infrastructure ⊗ The AI revolution in math has arrived ⊗ First Indigenous Group to ban data centers from its land ⊗ A macro array of colorful slime molds

Civilizational optionality

A few weeks back I watched a talk by Indy Johar at the Long Now Foundation and then one by Kate Crawford (links below). I thought I’d do a members’ issue featuring those two but haven’t gotten around to it, like so many ideas. Thankfully, they’ve published a new shorter piece by Johar, on civilizational optionality, which is basically a more focused excerpt of his longer talk. I often talk about inventing futures, whatever form that endeavour takes; here Johar is talking about preserving futures.

He explains that civilisation’s most pressing strategic task is not solving discrete crises but preserving what he calls “civilizational optionality,” the degrees of freedom that allow societies to adapt across multiple possible futures rather than narrowing into a single brittle trajectory. The framing distinguishes optionality from longtermism: longtermism prioritises civilisational continuity, which can be achieved with a thin slice of humanity intact; optionality requires that plural developmental pathways remain open. The urgency comes from what Johar describes as a forced “recoupling”: externalities like climate breakdown, soil loss, and hydrological disruption, long treated as separate from the economic operating model, are now feeding back as active destabilisers across food, energy, legitimacy, and security systems simultaneously. I quite like this, especially in opposition to the much-dreamt-of decoupling of GDP from energy consumption and carbon emissions. The essay works through ten such logics, each one a different vector through which the recoupling is already manifesting.

The practical proposal is to fund not solutions but the conditions under which solutions remain possible: stable food and water systems, legitimate institutions, and governance architectures capable of holding long-horizon commitments. He calls these “exstitutional wrappers,” coordinating structures that operate outside existing institutions, since no single institution can hold this kind of multi-generational responsibility. The essay closes with a self-correcting clause: any such structure must be designed to revise its own assumptions, or it risks becoming the thing it was built to prevent—a system that forecloses futures in the name of protecting them.

More → Johar’s talk mentioned above, Civilizational Optioneering, and Kate Crawford’s, Mapping Empires.

Through humans, machines, biological systems, and their entanglements, this stored [fossil fuel] energy has been mobilized into a planetary-scale cognitive system: distributed sensing, modeling, and acting capacities across biological, technological, and institutional substrates. The first photographs of the whole Earth were images of that system perceiving itself. […]

What we are now entering is a phase of forced recoupling: the externalities are no longer “outside” the operating model. They are re-entering the system as active constraints and destabilizing feedbacks. Carbon becomes heat stress and food volatility. Plastics become endocrine risk. Biodiversity loss becomes disease dynamics. Hydrological disruption becomes energy instability. […]

Even where the most critical typologies of optionality collapse are visible — glaciers, heat, soil, hydrology, fertility, legitimacy — the places most exposed are often not where effective prevention or optionality expansion can be financed, governed, and executed at speed and scale. […]

In a world where wealth is systemically entangled with the continuity of civilization itself, optionality becomes a foundational asset class: not one among many, but the first-class asset. Without allocation to optionality, wealth becomes terminally exposed to collapse as systems spiral toward zero-sum dynamics and mutually assured destruction.

The social edge of intelligence

I’ve often written and spoken about the gap between disappearing (to some extent) junior roles and senior ones. How do you become senior if the whole system training you to that level collapses? We’ve also looked to AI as collective knowledge, as a synthesis of what humanity knows (yes, a subset, for sure), and into model collapse—which I prefer calling Habsburg AI, as per Jathan Sadowski. Here Bright Simons brings these concepts together by arguing that the intelligence embedded in AI systems is not primarily a function of architecture or compute; it is a function of the social complexity of the civilisations whose language those models absorbed—the argumentation, institutional friction, and collaborative problem-solving that left linguistic traces worth learning from. As organisations offload cognitive work, eliminate junior roles, and reduce the messy human-to-human interaction that produces rich language, that “substrate” degrades. Simons calls this the Social Edge Paradox: the technology’s own deployment undermines the conditions that made it possible, endangering its continued progress, and our own.

The mechanism operates at civilisational scale. Human collaboration, argumentation, institutional friction—the social processes that produce expertise and contested knowledge—generated the rich linguistic record that made training useful models possible. AI deployment that substitutes for that interaction, rather than scaffolding it, doesn’t just deskill individuals; it progressively impoverishes the social substrate from which future training data draws. The linguistic traces of genuine social reasoning thin out, and models trained on what remains inherit statistical averaging rather than the argumentative complexity of civilisation. The studies Simons cites show this operating at the organisational level already: consultants using GPT-4 performed 19% worse on tasks requiring contextual judgment; early-career employment in AI-exposed fields has dropped 13% since 2022. These are early readings of a longer civilisational process.

Two directions Simons doesn’t address that I’d throw in there. The first is AI as sparring partner: human-to-human dialogue has qualities that LLM interaction cannot replicate, but working with an LLM rather than alone does preserve something, some friction, some counterpoint. Whether that is enough to meaningfully slow the substrate degradation remains to be seen, but to my mind it’s a counterforce. The second is synthetic data, which some researchers position as a path around the training data wall. Perhaps his argument already absorbs it: synthetic data addresses quantity, not the social complexity of what gets generated. More tokens of statistically averaged output do not reconstitute the rich disagreement that fed the original models.

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from. […]

The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible. […]

Language is often mistaken as an information pipe, but it is really a social coordination technology. […]

Getting intelligent minds to sync around an issue and work towards a common cause has always been the hallmark of human mental effort, whether it is raising giant pyramids or landing on the moon. A complex vision must radiate into the hive-mind to generate an interconnected consciousness that takes us from the solitary genius of apples falling on scientific heads to finally defying earthly gravity en route to Mars.


§ The term “AGI” is almost useless at this point. Some weeks ago I shared Robin Sloan’s AGI is here (and I feel fine) and got some pushback. Although the titles seem to oppose each other, I think this piece by Helen Toner is an excellent “follow-up” and makes clearer everything Robin was saying. “But that’s changed. Today’s best AI systems are good enough that they’re now inside the fuzzy conceptual cloud of ‘AGI-ish’: that is, they’ve surpassed some people’s definitions of AGI, while falling well short of others’. As a result, talking about ‘AGI’ is no longer a helpful way to gesture in a rough direction—instead, it’s likely to make some people think you mean one thing, and others imagine something totally different.”


§ Frugal AI helps countries priced out of Big Tech. This might be my new favourite LLM term, “Frugal AI.” “This is perhaps the most important dimension of frugal AI, […] it is about building leaner, more efficient systems from the ground up. By design, the systems use less compute, less memory, and less energy, which directly translates into a smaller carbon footprint.”


“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers

Futures, Fictions & Fabulations

  • Tobias Revell: Design futures in infrastructure. “Introduces Arup Foresight’s approach to helping organisations think and act more effectively in conditions of deep uncertainty. The talk frames futures thinking as a critical, designerly practice that goes beyond prediction, using scenarios, worldbuilding and speculative design to surface assumptions, stress test decisions and make long term change tangible today.”
  • The Protopian Prize. Incredible list of judges and advisors. “A fiction contest inviting you to share your vision of people working toward liberatory futures, meeting obstacles, and making real change. ‘Protopian’—a word coined by Kevin Kelly, one of our contest’s judges—means an achievable, optimistic future characterized by continuous, incremental progress rather than revolutionary leaps or a static, perfect state. Protopian stories imagine a future that is neither flawless nor catastrophic, but instead workably better than today. It’s about plausible progress rather than perfection or collapse.”

Algorithms, Automations & Augmentations

  • The AI revolution in math has arrived. “While no single new result is a world-beating breakthrough, some of them are on par with discoveries published in professional mathematical journals. In some cases, algorithms formulate a conjecture, prove it, and verify the proof with minimal human intervention. In others, extensive chats with large language models such as ChatGPT, Claude, or Gemini lead to novel proof strategies.”
  • The 2026 AI Index Report. “The AI Index offers one of the most comprehensive, data-driven views of artificial intelligence. Recognized as a trusted resource by global media, governments, and leading companies, the AI Index equips policymakers, business leaders, and the public with rigorous, objective insights into AI’s technical progress, economic influence, and societal impact.” They also published Inside the AI Index: 12 Takeaways from the 2026 Report.
  • India’s frugal AI startups Sarvam and Krutrim build sovereign models. Not the same piece as the frugal AI one above the link blocks! “The new book “LeanSpark” examines frugal innovation in India, including how startups Sarvam AI and Krutrim overcome cost and infrastructure constraints.”

Built, Biosphere & Breakthroughs

Asides

  • Barry Webb documents a marvelous, macro array of colorful slime molds. “This fungi-like form is one of hundreds of kinds of slime mold, and it typically only reaches a height of about two centimeters at the most. Thanks to Webb’s macro photos, we glimpse a phenomenally beautiful world up-close that is otherwise virtually invisible.”
  • Pejac transforms basic graph paper into detailed, trompe-l’œil tableaux. “[The artist] often turns to the precise geometry of gridded sketchbooks in order to challenge perception and think instead about depth and movement.” (Via Kottke.)
  • Underwater volcano eruption. “We went on an expedition to capture Kavachi, one of the world’s most active underwater volcanoes, erupting beneath the Pacific Ocean in the remote Solomon Islands. This short cinematic piece showcases selected field cinematography captured during an expedition to the Solomon Islands. Steam explosions, sulfur-rich plumes, and superheated seawater collide in one of the most extreme environments on Earth.” (Also via Kottke.)

Your Futures Thinking Observatory