The great erosion ⊗ Future of thinking about the future ⊗ On Sora

No.374 — Bubble, bubble, toil and trouble ⊗ Invest in your expeditionary teams ⊗ Young people in China are embracing AI therapy ⊗ The Evolving Doughnut

Photo by Guillaume Joseph on Unsplash.

The great erosion

Zoe Scaman with a great piece on AI at work and a couple of dangers I’ve been pondering myself. She warns that while AI promises to augment human creativity and productivity, there is a real risk of cognitive atrophy as people outsource their most novel and challenging thinking to machines. Scaman emphasises the need for deliberate, careful integration of AI, with clear boundaries to prevent the erosion of foundational skills, especially for junior talent who might never develop essential creative abilities. Early-stage atrophy can feel like increased productivity, but it quietly undermines the deep expertise and originality that define creative industries. To safeguard long-term capability, organisations should implement “struggle quotas” by deliberately working without AI on some tasks, ensuring that cognitive muscles remain strong and creative thinking thrives.

One thing I partly disagree with, or think we can look at from another angle, is the part where she talks about having “clear guidelines that distinguish augmentation from abdication.” To me, atrophy is when you lose a skill, in whole or in part; abdication is when you hand a task off to someone (or something) else. Part of what we do with AI blunts our skills, but part of it is outsourcing tasks to an algorithm instead of a person. The first is something to worry about; the second might simply be a different way of working (and worth worrying about too, but on a human level, not at the level of your own skills).

When they get a promotion, art directors don’t use Photoshop (or whatever) as much as when they were doing the work themselves; they’ve abdicated execution. Does that mean they are now un-creative and un-employable? No, they’ve switched to a more specific set of skills. The editor of a magazine might not have written a piece in years; that doesn’t mean they have no value. In other words, some AI usage has to be weighed against the skills we want to stay good at, while some of it is about zeroing in on what you are best at and would like to keep doing.

And one of the big things I’m really starting to worry about is talent. Specifically, the tightrope walk between cognitive augmentation and cognitive atrophy. Because I’m watching this play out in real time, and I’m not sure we’re getting the balance right. Actually, I’m fairly certain we’re not. […]

But what we’re not talking about, what we’re not worrying about nearly enough, is the other side of the coin. If we’re running at speed at this, if we’re pushing everyone to use AI because we think it’s some sort of panacea to every business challenge we have, what about atrophy? What about the erosion of expertise, of knowledge, of critical thinking, of the kind of deep collaborative thinking that actually produces breakthrough work? […]

There’s a core tension here that we’re getting completely backwards. AI should amplify the twenty percent of your thinking that’s genuinely novel, freeing you from the eighty percent that’s repetitive. The stuff that doesn’t require original thought. The formatting, the restructuring, the administrative overhead. But in practice, I’m watching people outsource the twenty percent, the hard conceptual work, and keep the eighty percent, the execution and formatting. […]

For advertising and creative industries, this risk is especially acute, and it terrifies me. The thing being atrophied is creative thinking, conceptual originality, the ability to make unexpected connections.

Future of thinking about the future

Nick Foster, who’s been on an intense media tour for his book Could Should Might Don’t, was interviewed by Lisa Gralnek on Future of XYZ. Throughout the discussion, Nick emphasises that there is a widespread lack of rigour in thinking about the future, where uncertain projections often become accepted as fact despite being based on stories, opinions, or guesses. He argues that many futurists produce work that lacks detail and honesty about the inherent uncertainties, which undermines the value of foresight. Foster also notes that the design process can be structured and researched, yet as soon as the team switches to “the future,” blurriness and a lack of rigour become par for the course. The work beyond that dotted line should be as informed as the rest of it.

The present’s volatility makes forming clear visions difficult, so it is crucial to approach future thinking with humility and care, acknowledging what we just don’t know. Foster encourages more structured, responsible, and nuanced conversations about the future, which can help us better understand and address the long-term consequences of our actions.

I think there’s a lot of tolerance for a lack of rigour when we’re talking about the future across the board, from designers and creative people through to more sort of investors, financiers, venture people, business leaders, strategists, everybody has a really visible weakness when it comes to talking about the future, that I think we really need to start addressing. […]

William Gibson does have a really lovely quote that I’ll butcher now. It’s very difficult to form detailed visions of the future because the present is too volatile. And he uses the phrase “we have insufficient now to stand on” which I really love as a way of thinking. […]

There is an argument to say that structured thinking about the future is harder and harder and less and less useful. I disagree with that, I think that actually what we’re all doing today is we’re all living within the consequences of insufficient foresight and thinking about the future from our predecessors, all those grainy people in those blurry black and white photographs who accidentally started things in motion that we’re now sort of fixing, living with, trying to work around, trying to solve. […]

[Lisa Gralnek] It’s like people say they want change, but actually no one does. And so in a world where everything is changing, I find that people hold on tighter and tighter to what they believe, rather than interrogating what they don’t know.

On Sora

I was (am) annoyed by the launch of Sora, so I wrote something on LinkedIn: There are international standards for the thread pitch of screws, for shipping containers, for date and time formatting, for paper sizes. Laws in various countries for who can call champagne champagne, for fire retardants in sofa fabric, for castrating bulls, for building and safety codes, for chemical labelling, and more.

Yet OpenAI can launch Sora, an app that is 100% sure to continue the job of rotting our brains, destroying our kids’ self-esteem, and vaporising our trust in what is real and what is fake. An app that will be fertile ground for bullying and intimidation. An app and its supporting servers that will use gigantic amounts of energy and water, pollute the air around power plants, and generally waste the resources we should be applying to reducing existential climate risk.

They have to respect Apple’s App Store rules and the GDPR. Other than that? Not much. Are we completely insane? How can apps with such a trivial use case and yet such deadly collateral effects be thrust on the public with no oversight, no reflection on societal impacts, no questioning of the resources expended, and virtually no restraints? In most large cities, cafés need a permit to install an outdoor terrace, but Sora and others just do whatever the hell they want. Madness.


§ Bubble, bubble, toil and trouble. Jon Evans with a great take on the AI bubble hubbub. I kind of extracted the best bit, but it’s not the most parseable quote on its own, so read the whole thing. “(1) AI is a once-in-a-species tech that will utterly transform the world by 2030. (2) AI is an important but normal technology. (3) AI is useless and counterproductive, the technological equivalent of asbestos. … My own view meshes pretty well with Epoch AI’s recent report on AI in 2030; call it a 1.5ish on the scale above. (Maybe more like 1.66 but who’s counting.) If so, then several hundred billion dollars is a completely reasonable investment! The world already spends at least $1 trillion/year on software, so if AI merely doubles the efficiency of software development — which seems at least plausible, we’re not there yet but we’re getting there, vibecoding has changed a lot (for the better) just over the last several months — then AI is a $500B/year technology on that alone.”
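To spell out the back-of-envelope arithmetic at the end of that quote (the $1 trillion/year figure is Evans’, the rest is simple algebra): doubling efficiency means getting the same software output at half the cost, so the implied annual value is

$$ \$1\,\mathrm{T/yr} \times \left(1 - \tfrac{1}{2}\right) = \$500\,\mathrm{B/yr} $$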


Futures, Fictions & Fabulations

  • Invest in your expeditionary teams. “And this is the kind of practice organizations need now. Not more slide decks, not more reports, not sterile personas. What's required are artifacts — what I call functional fictions. Tangible probes built quickly, in two-week or four-week sprints, to make possible futures something you can see, touch, or use. … Efficiency won't help you here. What's needed is a capacity to explore — to send small, expeditionary teams out into this new terrain with the mandate to build functional fictions and bring back what they find.”
  • Tech Trends report 2026. “The technological developments in this report take place within a broader context of social, technological, economic, environmental and political influences. These drivers, from ageing and digitalisation to climate change, inequality and geopolitical tensions, shape the playing field in which education and research must set their course.”
  • Designing with Futures: Research on the purposes and practices of engaging with the future in strategic design. “In this thesis, I draw from anticipation and Futures Literacy (FL) theories and the typology of Six Foresight Frames. Anticipation and FL studies examine anticipation and how to diversify it, while Six Foresight Frames serves as the main research framework for evaluating anticipatory purposes and practices in design. Using thematic analysis, I interpret the ways of anticipating in strategic design practice, presented through four main findings.”

Algorithms, Automations & Augmentations

  • This week all three links are from Rest of World’s coverage of AI in China. It’s not a placement or sponsorship of any kind, just a great issue of their newsletter.
  • Young people in China are embracing AI therapy. “Cheap, accessible, and friendly AI tools can augment scarce professional help, but there are risks to overreliance on the technology.”
  • AI is reshaping childhood in China, from robot tutors to chatbots. “China’s push to integrate AI into children’s lives has created a huge business opportunity for companies. Parents say AI tools are better — and less expensive — than human teachers and tutors. Experts warn that use of untested AI tools could harm children’s development and widen inequalities.”
  • China’s ghost city Ordos turns into a hub for autonomous vehicles. “Ordos has become a testing ground for self-driving vehicles. Autonomous trucks now haul coal through its empty streets. The desolation makes it safe for testing but useless for perfecting real-world AI.”

Built, Biosphere & Breakthroughs

  • The Evolving Doughnut. More red, always more red. “This report by Kate Raworth sets out where inspiration for the framework came from, and how and why it has evolved over its first three iterations. Following this, the paper presents tables showing the dimensions, indicators and data used for each of those three versions.”
  • ‘Super big deal’: High seas treaty reaches enough ratifications to become law. “The agreement on marine biodiversity of areas beyond national jurisdiction, also known as the high seas treaty, was reached in 2023 with much fanfare in marine conservation circles, partly because it sets up a system for establishing marine protected areas (MPAs) in international waters.”
  • China’s oyster-inspired ‘bone glue’ bonds fractures, can replace metal in surgery. “The new glue can be injected directly into a fracture site to help speed up bone repair. It bonds bone fragments together in 2–3 minutes, even in blood-rich areas where most adhesives fail.”

Asides

Your Futures Thinking Observatory