Why science fiction can’t predict the future ⊗ The straw, the siphon, and the sieve

No.388 — The Field Guide to Design Futures ⊗ Mozilla recruits partners to take on AI goliaths ⊗ Hamburg combats loneliness with “culture buddies” ⊗ Art veteran uses Gen Z slang

Why science fiction can’t predict the future. Created with Midjourney.

Why science fiction can’t predict the future (and why that’s a good thing)

When I found it in the Reactor newsletter last week, I almost put this piece by Ken Liu in the futures “link block” below. Good thing I didn’t; it’s well worth featuring, and well worth the read. Scifi, technology, retro-futures, history, paths taken and not taken, metaphors. Lots woven in there.

Liu asserts that science fiction fails at prediction but succeeds as mythology. The genre’s abysmal track record—no flying cars, no post-nuclear Terminators, no sentient HAL 9000—doesn’t prevent its concepts from shaping how we talk about technology. Terms like “Big Brother” and “cyberspace” describe contemporary reality, but the real systems bear little resemblance to their fictional origins. Our surveillance state arose through voluntary privacy trades for convenience, cobbled together from tech companies, advertisers, governments, bots, bad laws, and our own imperfections. Orwell crafted a powerful metaphor, not an accurate prediction.

History is a stumble through competing possibilities. Around 1900, 40% of American cars ran on steam, 38% on electricity, 22% on gasoline. Steam seemed reliable, electricity had Edison backing better batteries, while gas cars were loud, dirty, and required dangerous hand-cranking. Hundreds of plausible reasons explain why gas won—cheap Texas oil, failed EV rental models, the electric starter, Henry Ford’s determination—but the actual sequence involved random events, wars, bankruptcies, and cultural shifts that nobody could foresee. We mistake the consequences of winning for its causes, weaving triumphalist narratives that make the present seem inevitable. Science fiction authors face an impossible task: they work before the breakthrough that decides which competing solution wins. They can only guess and construct plausible worlds around their guess, turning effects into causes through character arcs and moral resolution.

This failure becomes the genre’s strength. Science fiction creates myths—thinking machines questioning their makers, immortality through genetic code, autonomous houses—that give us tools to understand a world where technology dominates our evolutionary future. Liu quotes Le Guin: Mary Shelley released Frankenstein’s monster, and nobody has shut him out since. The monster sits in our modern living rooms because myths don’t vanish under scrutiny. Read as prediction, Frankenstein fails, but prophecy was never the point. Silvia Park’s robot children in Luminous offer another variation of the Abdicating Parent archetype. The specific scenario matters less than the framework it provides for understanding the fraught relationship between mortals seeking immortality through generation. Science fiction’s wrongness provides hope—there are no laws dictating the future, and dystopia arrives only if we build it. The metaphors endure long after predictions crumble, becoming vocabulary for making sense of an impossible present and constructing an unimaginable future.

The prospective view, in that moment before the breakthrough, when all the potential solutions are vying for attention, is completely different from the retrospective view, long after the breakthrough technology has transformed the world and secured its own triumphalist narrative. Survivorship bias, confirmation bias, selection bias, hindsight, narrative fallacy, wishful thinking, arrogance… there are countless names for the cognitive biases humans exhibit when we try to tell the story of the past from our place in the present, and we must constantly remind ourselves that the way it is is not the way it has to be. […]

By crafting entertaining stories, authors invent powerful metaphors that shape how we imagine our technological future and understand our technological reality. These metaphors are why science fiction matters. […]

These are all metaphors that allow us to make sense of a world in which the products of our imagination and craft, technology and invention, increasingly dominate not just our own evolutionary future, but the future of the planet as a whole. We live in a world in which the possibility field is growing ever grander, and new myths are needed to make sense of it. […]

These modern myths become part of our vocabulary, the framework and tools with which we make sense of the impossible present and then construct the unimaginable future.

Technology and wealth: the straw, the siphon, and the sieve

Nate Hagens also works with metaphors. In this essay he argues that technology doesn’t create wealth (usable energy, organized matter, and the stocks and flows that make life possible, viable, and enjoyable); it extracts it faster. He uses three metaphors to explain how technology functions at scale. The straw represents how technology accelerates the drawdown of natural resources, turning stocks into flows. Fracking exemplifies this: we access oil more quickly without finding more of it, getting closer to the slurping sound at the bottom of the milkshake. The siphon describes how gains concentrate as technology scales. Network effects and capital requirements favour early movers and large players, creating chokepoints that allow platform owners to extract value simply by controlling access. The sieve filters wealth away from other species and toward humans, particularly a small subset. The technosphere now outweighs all living biomass, redirecting 40% of Earth’s net primary productivity to human use.

Hagens extends this framework to debt, which functions as social technology that pulls future resources into present consumption while concentrating returns through interest payments. AI amplifies these dynamics in the cognitive realm. Like fossil fuels multiplying physical labour, AI scales pattern recognition and coordination at near-zero marginal cost once trained. This accelerates the extraction of attention and decision space, deepens the concentration of returns to platform owners, and optimises for current metrics without accounting for soil health or ecosystem stability. Hagens doesn’t claim to have solutions, and instead closes with ten questions about scale, responsibility, and what constitutes real wealth. These questions can help us investigate when tools become destabilising, who absorbs costs off balance sheets, and whether we’re borrowing from the future while calling it innovation. His framing suggests speed itself might be a risk variable rather than an unquestioned good.

What would our economy look like if “wealth” meant the continuity of flows rather than the liquidation of stocks? Sunlight, rain, soil fertility, functioning ecosystems – not just this quarter’s output. […]

What would change if we treated speed as a risk variable rather than an unquestioned virtue? What would shift if slower systems weren’t seen as failures, but as systems with time to notice mistakes? […]

But when a technology works, it spreads – especially in a globally-interconnected economy. When it spreads, it scales. And when technology scales across whole economies and decades, its role and impact changes. At the macro scale, technology acts as a set of tools that lets us pull “more” from the world per unit time as an economy and a species. […]

In that sense, AI steepens the same gradients we’ve already been riding, creating more throughput, more concentration, and less time and awareness to notice what’s being lost. […]

At what point does scale change the moral and physical meaning of a tool? When does something that helps at a village level become destabilizing at a planetary one?


§ Stubborn Optimism, tending your inner fire, and why hope is not enough. Nate Hagens again, this time interviewing the fantastic and inspiring Xiye Bastida. I loved her views on activism, on centering nature, but also on futures. “So that’s one of my theories of change. If it doesn’t exist, I build it and I build it the way I would like to live in the future because I’m practicing the future today.”


Futures, Fictions & Fabulations

  • The Field Guide to Design Futures. Heck of a list of contributors + me ;-). “There is something inherently fascinating yet reassuring about manuals. They promise results as long as you follow steps and recipes that you can easily replicate and apply every time you need them. The Field Guide to Design Futures works on a different premise: you build your own understanding of what Design Futures and futures thinking are and design accordingly by selecting, assembling, scraping, and skimming different entries, voices, and contributions that make up this volume.”
  • What’s between, between? “The exhibition takes Gulf Futurism as its starting point—a term that emerged to describe the unique experience of rapid transformation across the Arabian Peninsula, where hyper-modernization and clashing visual cultures create a distinctive sense of living between multiple temporalities. It captures the dizzying collision of histories with futures, luxury malls alongside desert landscapes, and centuries-old traditions coexisting with cutting-edge technology.”
  • The Future 100: 2026. “Amidst this ‘metamorphic’ current, the desire for human connection remains unmistakable. In the year ahead, human impulse will shape brand strategies, influencing the top marketing trends in 2026, and draw people back to immersive, high-impact experiences that demonstrate the value of authenticity and unlock infinite possibilities.”

Algorithms, Automations & Augmentations

  • Mozilla recruits partners to take on AI goliaths. “The company is putting together ‘a rebel alliance of sorts,’ … The goal is to make AI more trustworthy while offering a counter to massive players like OpenAI and Anthropic. ‘It’s that spirit that a bunch of people are banding together to create something good in the world and take on this thing that threatens us, [i]t’s super corny, but people totally get it.’”
  • How the world lives with AI: findings from a year of global dialogues. “Through seven rounds of deliberation with more than 6,000 people across 70 countries, we’ve built recurring infrastructure to learn how the world actually lives with AI—what people use it for, whether they trust it, and how it is changing their daily lives.”
  • Why India’s plan to make AI companies pay for training data should go global. “A license fee for the use of copyrighted data can compensate creators and help AI companies avoid lengthy legal fights.”

Built, Biosphere & Breakthroughs

Asides

  • National Gallery of Art veteran uses Gen Z slang in viral videos. Legend! “Myers and Mary King, the museum’s social media copywriter, wrote a script by pulling words from a spreadsheet they created full of Gen Z jargon. … Luchs speaks five languages: English, French, Italian, and some German and Russian. She approached grasping Gen Z parlance like she was learning another language. King coached Luchs in pronouncing the words.”
  • Thousands of Chinese fishing boats quietly form vast sea barriers. “China quietly mobilized thousands of fishing boats twice in recent weeks to form massive floating barriers at least 200 miles long, showing a new level of coordination that could give Beijing more ways to impose control in contested seas.”
  • “Doing Is Living” highlights five decades of Ruth Asawa’s biomorphic wire sculptures. “‘I study nature and a lot of these forms come from observing plants,’ Asawa said in a 1995 interview. ‘I really look at nature, and I just do it as I see it. I draw something on paper. And then I am able to take a wire line and go into the air and define the air without stealing it from anyone.’”

Your Futures Thinking Observatory