The next great transformation ⊗ AI and the futures of work

No.391 — Perhaps AI is the paperclip ⊗ The imagination curriculum ⊗ On algorithmic wage discrimination ⊗ Romania and the link between economic growth and high emissions ⊗ Origami to imagine emergency shelters

New Luddites. Created with Midjourney.

The next great transformation

Fascinating piece with which to consider the rise and impacts of AI. When asked about AI, I’ve often said that if it were controlled by states, or at least not by maximalist founders, and took into account what happens to people who lose jobs, it would be an entirely different discussion from the “move fast and break people” approach we are currently seeing. Here Jeremy Shapiro provides the perfect lens through which to consider ways of balancing AI and society. He revisits and explains Karl Polanyi’s concept of the “double movement”—the historical pattern in which market expansion generates a social counterforce demanding protection—and his broader argument that markets must remain embedded in social and political institutions, not the other way around, to avoid destroying the societies that host them.

Writing in 1944, Polanyi traced how the nineteenth-century experiment with self-regulating markets—treating labour, land, and money as pure commodities—produced dislocation so severe it destabilised democracies and cleared the path for fascism. His argument was that when states fail to shield populations from the raw force of market disruption, societies reach for whatever protection remains available, however illiberal. The interwar gold standard illustrated this perfectly. Germany’s commitment to it transmitted the Depression directly into social life; when democratic governments chose austerity over people, political legitimacy collapsed. By contrast, the postwar welfare state rebuilt that legitimacy by cushioning dislocation and redistributing the gains of industrial capitalism broadly enough to sustain social peace.

Shapiro’s argument is that AI represents a structurally similar moment. The technology threatens to decouple productivity from labour—automating not only routine tasks but cognition, judgement, and career progression itself—while concentrating the gains in a small group of firms and regions. This is, in Polanyi’s terms, a disembedding shock: economic activity pulled out of the social institutions that have historically absorbed change. He then looks at how the US, Europe, and China are approaching AI, and at whether their strategies address the Polanyian challenge well, badly, or not at all.

One of the article’s central points is that the familiar “AI race” framing—who builds the largest models, controls the most compute—entirely misses this dimension. Speed without social protection accelerates backlash; backlash erodes political capacity; and states with weakened legitimacy lose long contests regardless of their technological lead. The real competition, Shapiro concludes, is over which model of social embedding can integrate AI without tearing society apart.

In Polanyian terms, AI is beginning to disembed economic activity from the social institutions that have absorbed change, creating precisely the conditions under which political counter-movements emerge. Such risks are already visible in rising populism, political volatility, and declining trust in institutions across advanced economies. AI may well accelerate the technological and economic trends that are already straining the social fabric. […]

This constraint matters because it creates a structural mismatch. Markets are global, but social protection remains largely national. Firms can shift profits, relocate assets, or threaten exit while maintaining access to consumer markets. Governments seeking to tax AI rents or impose social obligations face an immediate credibility problem. Even well-designed domestic re-embedding strategies risk erosion if firms can arbitrage jurisdictions. […]

For AI governance, this implies a sobering conclusion. If comprehensive international coordination on taxation and social standards remains politically constrained, then re-embedding efforts may not have the tax base they need to redistribute the gains and provide social protection. Success will depend on a mix of partial and fragile coordination among like-minded states, some bloc-level rule-setting, and access-based enforcement mechanisms that link participation in large markets to compliance with social obligations such as taxation. […]

The goal remains social integration and protection from market volatility, but the mechanisms must extend beyond labor markets alone. Embedding must increasingly target income, rather than jobs; firms and platforms, rather than individual workers; and status and participation, rather than employment per se. […]

Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing and often in rather spectacular fashion.

AI and the futures of work

Johannes Kleske, drawing on more than fifteen years of tracking AI-and-work discourse, pushes back against the viral “everything is changing” articles that resurface with each new model release. His target is a recent post by AI entrepreneur Matt Shumer, which Kleske reads not as a forecast but as what he calls a “present future”—a story about today dressed up as a prediction about tomorrow. Shumer’s error, in Kleske’s view, is extrapolating a personal experience in one narrow field (coding) into a claim about all work. The same pattern appeared after AlphaGo in 2016, after ChatGPT in 2022, and in dozens of earlier cycles. Each time, the predictions failed to account for how complex work actually is. Kleske also invokes the Jevons paradox—the nineteenth-century observation that greater efficiency in coal use led to more coal consumption, not less, a pattern that has repeated ever since—to explain why AI tools are making many people work more, not less.

The second argument is about what this kind of hype does to people. Drawing on L.M. Sacasas’s “Borg Complex” concept, Kleske argues that FOMO-driven narratives trigger reactance: they push people away rather than drawing them in. The attention economy amplifies fear over useful information, and the result is a counter-movement forming against AI precisely when people might benefit from engaging with it. Kleske’s prescription is not to disengage but to approach AI as a normal technology—experimenting without the urgency, comparing it to the internet in 1999 rather than to an imminent takeover. The things that will actually matter, he writes, probably haven’t been built yet.

I don’t want [people to believe “resistance is futile.”] I think AI is changing things, but I want society to shape this transition according to its values. The question I keep asking is, how can we use the best of this technology but with the values we have as a society and the way we want to live in this world? How can people gain more agency in shaping the future, instead of having it dictated to them?

This circles back nicely to the Shapiro piece above. Individuals need to better understand the technology to regain some agency, and societies need the same kind of rekindled resistance to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around.

I’m only interested in present futures because you can learn a lot about the present from listening to stories about the future. Just as reading science fiction predicts very little about the future but reveals what we project into it based on our current problems. […]

I’m convinced that AI is going to change work fundamentally in many places. But it’s going to take much longer, it’s going to be so much weirder, and it’s going to be so much more unexpected than today’s predictions suggest. […]

The intriguing question isn’t what AI can do. It’s what new kinds of work and value emerge once things shift.


§ Some days I feel very, very tired. Like when, within an hour, I read that family deepfakes help people celebrate and grieve in India, that a judge had to scold Mark Zuckerberg’s team for wearing Meta glasses to their social media trial, that some people talk about using millions or even billions of LLM tokens a day without once mentioning energy or electricity, or that WD and Seagate confirmed that hard drives for 2026 are sold out because hyperscalers are outspending the rest of the world. Nicholas Carr might be right, perhaps AI is the paperclip.

Bostrom’s story [of the paperclip maximizer], I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?


Futures, Fictions & Fabulations

  • The imagination curriculum. Zoe Scaman’s “reading list for strategists who want to think dangerously.” Excellent, detailed sci-fi recommendations. “They’re not strategy books, but they’ve taught me more about thinking strategically than most of what’s on the business shelf. Because they do the thing we’ve forgotten how to do: question the frame. Follow an assumption past the threshold of what’s comfortable. Imagine that the whole thing could be organised differently.”
  • The future of data centres: how is the industry changing in the AI era? “The demands on the data centre industry are evolving rapidly – so must our understanding of the issues they face. A new series of reports explores the future of the sector, from technological performance, resource use, energy to data centres’ wider role in the community.”
  • Megatrends 2026. “Sitra’s new megatrend review outlines the overall picture of change, and the constraints and the opportunities relevant to Finnish society to offer support for decision-making. The report interprets megatrends from Finland’s perspective through four themes: people and culture, power and politics, nature and resources, and technology and the economy.”

Algorithms, Automations & Augmentations

  • On algorithmic wage discrimination. “Drawing on a multi-year, first-of-its-kind ethnographic study of organizing on-demand workers, this Article examines the historical rupture in wage calculation, coordination, and distribution arising from the logic of informational capitalism: the use of granular data to produce unpredictable, variable, and personalized hourly pay.”
  • AI could mark the end of young people learning on the job—with terrible results. “The arrangement meant that employers had affordable labour, while employees received training and a clear career path. Both sides benefited. But now that bargain is breaking down. AI is automating the grunt work – the repetitive, boring but essential tasks that juniors used to do and learn from. And the consequences are hitting both ends of the workforce. Young workers cannot get a foothold. Older workers are watching the talent pipeline run dry.”
  • Wikipedia volunteers spent years cataloging AI tells. Now there’s a plugin to avoid them. “Tech entrepreneur Siqi Chen released an open source plugin for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called ‘Humanizer,’ the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways.”

Built, Biosphere & Breakthroughs

Asides

Your Futures Thinking Observatory