The future of being human ⊗ For the sake of mutual interdependence

No.389 — The space trash apocalypse ⊗ Jeff Bezos, moral cretin ⊗ The futures cone reimagined ⊗ The Authoritarian Stack ⊗ Local energy networks saving lives ⊗ Tangible media

The future of being human

Indy Johar argues that as prediction and optimisation (LLMs) become infrastructure—embedded in pricing, access, ranking, and the allocation of attention—what becomes scarce isn’t computational power but something else entirely: attention that can settle without extraction, relationships that form without accounting, uncertainty that doesn’t collapse into anxiety, and the ability to become, “without being prematurely named, scored, or fixed.” With too much optimisation, legibility becomes the condition through which resources and access are allocated, so people learn to make themselves readable. The hidden cost is that this compresses what can’t be represented without being diminished. Akin to “the map is not the territory,” what isn’t measured is ignored.

This isn’t nostalgia for a pre-digital world, nor the analogue trend. Johar proposes a set of categories he calls “pre-legibility zones” and “opacity commons”—public and semi-public spaces designed so that capture isn’t the default and identity performance isn’t the price of entry. These are “bounded worlds” where the right to remain partially unknown is treated as a civic affordance, with what he calls “selective legibility”: opacity by default, proportional accountability, consentful revelation. The argument extends to “machine-assisted rewilding,” where technology actively creates space for irreducibility rather than increasing capture. What makes this compelling is that it’s not about retreat—it names something important and keeps it, rewilded, within the existing world.

To him, the future of being human isn’t the opposite of machine intelligence but its complement—the institutions, environments, and practices that ensure prediction doesn’t become total formatting, that optimisation doesn’t flatten the conditions of meaning, that intelligence doesn’t reduce life to what can be scored.

In a way, it’s kind of a Chatham House Rule for life. It also reminded me of Clive Thompson’s piece Rewilding your attention, shared in No.285 over five years ago! Johar’s “practical doctrine” also reminded me of “gevulot” in Hannu Rajaniemi’s The Quantum Thief, which lets each person decide what information about them is available to others.

Selective legibility is the middle path between two failures: total capture, which corrodes formation and agency, and romantic opacity, which can shelter harm. The aim is not to disappear. The aim is to make life livable: to allow becoming, while being held. […]

It can also mean an anti-optimisation layer: systems that introduce friction where extraction would otherwise be automatic; that detect when environments are becoming too capturing; that enforce norms of non-instrumental interaction; that protect the right to opacity and the right not to be continuously translated into signal. […]

But there is another coupling available: machines that actively create space for irreducibility—systems that reduce capture rather than increase it, that preserve unpriced time, that protect attention as a right, that enable encounter without turning it into data. […]

The invitation is to begin unfurling: to prototype the conditions that allow thicker forms of life to re-enter the everyday; to create spaces where micro-communication can return; to defend the right to opacity as a civic affordance; to design selective legibility as a livable doctrine rather than an abstract principle; to explore machine-assisted stewardship as an institutional stance rather than a moral aspiration.

Achieving independence for the sake of mutual interdependence

It’s pretty uncanny sometimes how articles align. It’s usually on purpose of course, but once in a while it just happens. I saved this piece because it’s an interview with LM Sacasas, and for me Sacasas = save to Reader on sight. As soon as I started reading, though, the parallel was right there. Johar was proposing a rebalancing of optimisation technology, where it’s a complement to humans, not an extractive overlord; where we can decide when we are measured, or not, and where we regain agency.

In the case of Sacasas, he draws on Ivan Illich’s concept of “convivial” tools—human-scaled technologies that empower rather than disable—to argue for a restrained relationship with technology, one that preserves human agency and interdependence. Illich, building on Jacques Ellul’s critique of technological society, saw industrial-age institutions as counterproductive once they pass certain thresholds: they create dependencies instead of freedom, outsourcing competencies like navigation (GPS replacing maps) or communal practices like burials to professional classes. Sacasas argues that we must ask what is good for humans to do regardless of whether machines can do it better, since building skills and depending on each other creates the threads that weave communities together.

The goal here isn’t rugged individualism but achieving autonomy for the sake of mutual interdependence—communities with the strength to order their lives according to their values. Hannah Arendt’s vision of the world as gift, and her example of the table as ideal technology (bringing people together while maintaining distinction), frame a different orientation: receiving the world with gratitude rather than treating it as a field for engineering solutions.

But my chief problem with the rhetoric of inevitability was that it was deployed by those who wanted to foreclose our thinking and judging. It doesn’t want us to think about whether this would be a good development or not for us. Often, it was assumed that it would be good — the new device, the new efficiency, the new mode of optimization — but good for what and good for whom? Maybe good for the bottom line of a company. Maybe good in discrete ways for some individuals. But many of these tools have not been good for us. […]

But Gay suggested that we are formed by the habits implicit in our economic structures, political structures, and scientific technological structures. As we participate in those structures at a pre-rational level, we are being shaped and formed by them. […]

Tools are not just an expression of our desires, but they form our desires. Tools are not just an expression of our agency, but constrain and empower our agency. […]

One of the trends implicit in the technological structures of modernity is that they isolate us. They make it difficult to form the moral communities of deliberation and practice that can help us slow down, think, and make choices.


§ 003: The space trash apocalypse you haven’t been thinking about. Excellent discussion between Radha Mistry and Tobias Revell, primarily about the Kessler syndrome, which “describes a situation in which the density of objects in low Earth orbit (LEO) becomes so high due to space pollution that collisions between these objects cascade, exponentially increasing the amount of space debris over time.” Tobias wonders if we might not already be in it. Good point. Some crazy people seem to be working on it, since “SpaceX seeks authority from the FCC to launch and operate a constellation of up to one million satellites as orbital data centers.” As I said on Bluesky, “In my opinion, it’s not at all impossible that Melon Husk’s name ends up going down in history not as a real life Tony Stark, but as the guy who caused the Kessler effect that ruins space for everyone.”


§ Jeff Bezos, moral cretin. “No, the thing that has changed is that Jeff Bezos has developed a political agenda. He is on Team Billionaire. Team Billionaire thinks that billionaires are brilliant, wise, and omnicompetent. It can’t stomach leaving journalists in charge of the journalism, because surely the billionaire owner has better instincts and deeper insights. Team Billionaire thinks the public needs to stay in line and respect their betters. Team Billionaire thinks the government should stay on the sidelines (at least until it’s bailout time, that is).”


Futures, Fictions & Fabulations

  • The futures cone reimagined: A framework for critical and plural futures thinking. “This article critically re-examines the Futures Cone, a foundational but frequently misapplied tool in foresight practice. Often treated as a forecasting method or creative prompt, the Cone is reframed here as a relational and epistemic scaffold that only gains meaning through reflective, participatory processes.” That being said, we don’t talk enough about the future burrito.
  • Foresight 2026: Roland Berger China Annual Trends Report. “This report provides trend analysis and in-depth insights into key industries, such as Automotive, Civil Economics, Consumer Goods and Retail, Health, Energy, Industrial Products and Services, and Technology. Additionally, this year's report delves into several major hot topics, including China's potential unleashed in a new world order, artificial intelligence, Chinese companies' international expansion, transaction and investor services, new quality productive forces, and sustainability, aiming to stimulate thought and provide valuable insights to industry stakeholders.”
  • A Manifesto for Future Cities. “Reflections on future cities point to a deeper issue: cities are not only struggling with what kind of futures they are heading toward, but also with how to consciously move away from paths that no longer serve them and collectively define their future, with well-being emerging as a meaningful—yet hard to operationalize—compass for urban development.”

Algorithms, Automations & Augmentations

  • The Authoritarian Stack. “How tech billionaires are building a post-democratic America — and why Europe is next. … Under the banner of ‘patriotic tech’, this new bloc is building the infrastructure of control—clouds, AI, finance, drones, satellites—an integrated system we call the Authoritarian Stack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules.”
  • International AI Safety Report 2026. “The second International AI Safety Report, published in February 2026, is the next iteration of the comprehensive review of latest scientific research on the capabilities and risks of general-purpose AI systems. Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, the report is backed by over 30 countries and international organisations. It represents the largest global collaboration on AI safety to date.”
  • AI in science research boosts speed, limits scope. “As individual scholars soar through the academic ranks, science as a whole shrinks its curiosity. AI-heavy research covers less topical ground, clusters around the same data-rich problems, and sparks less follow-on engagement between studies.”

Built, Biosphere & Breakthroughs

Asides (archive week I guess?)

Your Futures Thinking Observatory