AGI is here (and I feel fine) ⊗ A call for new stories

No.385 — BASAAP + hyperstition ⊗ Ancient everyday weirdness ⊗ The future isn’t fixed ⊗ Science, promise and peril in the age of AI ⊗ A year of clean energy milestones

AGI is here (and I feel fine)

Robin Sloan argues that AGI has already arrived, though no one wants to admit it. He points to OpenAI’s 2020 paper on GPT-3 as the threshold moment—when researchers discovered that a single large language model could outperform specialised systems across multiple tasks. In his opinion, the resistance to declaring victory comes from both critics who don’t want to concede ground and industry players who keep moving the goalposts, waiting for something more dramatic to happen. Robin suggests this reluctance serves a purpose: it maintains the narrative that utopia is just around the corner, requiring only more funding and computing power.

This is why he proposes a “unilateral declaration [of AGI] as a strategic countermove” and asks “what now?” Such a declaration acknowledges a genuine technical breakthrough while stripping away the millenarian hype around imminent transformation. The models have limitations: they can’t handle the physical world and they struggle with novel problems that humans solve easily (see jaggedness). But they possess what he calls “prodigious immediate generality.” Sloan draws a comparison to personal computers in the 1970s and 1980s, when grand visions were substantially realised but didn’t deliver utopia. Everyone now has access to something like AGI, just as everyone has a personal computer and an internet connection.

From a slightly different angle, I’d argue this debate is funny because LLMs are actually more general than intelligent. Like Robin, I’ll cite Jasmine Sun: “AI discovered wholly new proteins before it could count the ‘r’s in ‘strawberry’, which makes it neither vaporware nor a demigod but a secret third thing.”

To those who think the piece might be too positive about LLMs, I’ll remind you that one can be critical of all the pitfalls and misunderstandings, and be aware of the semantic traps, and still have their brain explode when working with LLMs. All these things are true. The biased training and permissionless taking of people’s work, the purposeful use of words to make it sound like it’s human/actually thinking, the extractive business models, the imperial attitude, the broligarchy, the misleading chat-focused interfaces, etc. But also the breadth of what they can do, the uncanniness of it, the power, the potential, the questions. It’s in part why I find the field so fascinating, but also, I think, why it’s so cleaving.

What do I mean by AGI? Many competing definitions depend on words that themselves have competing definitions; these words include “valuable”, “work”, & “human”. […]

Language models were trained for a purpose, too … but, surprise: the mechanism & scale of that training did something new: opened a wormhole, through which a vast field of action & response could be reached. Towering libraries of human writing, drawn together across time & space, all the dumb reasons for it … that’s rich fuel, if you can hold it all in your head. […]

That’s all to say, for all the math & matériel involved in their care & feeding, the big models are more like Twitter than they are like jet engines, & this whole thing was a surprise anyway — from which no one has quite recovered — so I will defend vigorously the right of anybody/everybody to reflect & opine on AI’s properties & potential, & to declare, when it seems obvious: AGI is here.

  • Threads → Above I mentioned the use of words that make it sound like LLMs are actually thinking, which they aren’t. In We need to talk about how we talk about “AI”, Emily M. Bender and Nanna Inie write about exactly that, and about how we need to use different words. Although they are correct, I think that ship has sailed; we can only nip at the edges in the hopes of helping people’s understanding, and there won’t be a large-scale change until the inevitable moment where AI is just “software.”
  • In his talk at ThingsCon, Matt Jones shows a box of random cables, like the one so many of us have at home, and says “perhaps [I have] a brain like this as well. I have this drawer full of stuff which I keep because I know it’s going to be useful one day. And so this talk I hope is a little bit like this, hopefully it has some connections that you can use one day.” It does! Lots of great tidbits and thoughtful pieces in there. You can make shorter work of his write-up on his blog or have a more leisurely watch of the actual talk on YouTube. (Also, I have to say that the box-of-cables-as-brain analogy made me feel seen. 😂)
  • During the talk, Jones mentions the concept of hyperstition, “a self-fulfilling idea that becomes real through its own existence.” Somehow, my archive has articles mentioning the word going all the way back to 2019, but I’ve never started using it. Weird. The first mention was in Accelerationist Art, which I haven’t re-read; it seems interesting but is also, regrettably, pro-accelerationist. More useful is Matt Webb’s post From the other side of the bridge. It’s also a nice circling back to Jones, since the two were colleagues. More importantly, I’m putting it here because it’s a great concept to have in the back of your mind as you read the next article below.

A call for new stories

This great piece by Paul Jun is history of design meets culture meets technology meets imagine the futures you want. He examines why the mid-century modern aesthetic succeeded in America while current attempts to define a “new aesthetic” feel hollow (to me the new aesthetic will always be the Bridle one, which was observed, not fabricated). Jun traces how modernist design spread through a specific pipeline: Bauhaus émigrés like Walter Gropius rebuilt Harvard’s architecture curriculum in the late 1930s, trained a generation of architects who formed firms like The Architects Collaborative, developed suburban communities like Six Moon Hill in Lexington, Massachusetts, and opened retail outlets like Design Research that made modern furniture available to ordinary consumers.

The author argues that aesthetics follow stories, not the other way round. Kennedy’s moon mission gave America a frontier narrative that pulled funding and attention toward space technology. NASA’s demand for integrated circuits, which consumed 60% of American chip production by 1963, created the conditions for Silicon Valley to exist. The Apollo program didn’t just reach the moon. It built the semiconductor industry. He insists that calls for a “new aesthetic” miss the point: we need a compelling story about the future first, one that organises money, labour, and imagination. Without that narrative structure, answering “the call” by funding artists to create new forms produces decoration rather than a movement that reshapes how people live.

Bauhaus is [Stephanie Wakefield’s] model for how new form becomes possible: not by declaring a style, but by rebuilding the conditions for form through a shared experimental milieu that retrains perception and remakes the categories of thought and making. […]

The Industrial Revolution was rewriting the world: mass production, soot, speed, repeatability. The machine wanted everything to become straight lines, right angles, and cheap sameness. Art Nouveau answered with vines, hair, insects, vibrant colors, and smoke. It insisted that a human life should still feel alive, even as the economy became mechanical. […]

Maximum care. Maximum accountability. Maximum meaning. Maximum progress. Maximum support for children, parents, education, and clean water. Objects that don’t just perform, but belong. Buildings that don’t just impress, but hold us. Interfaces that don’t just convert, but teach people to see again. A future that doesn’t look like a spaceship, but feels like a place worth embodying.

More → Since Jun speaks of aesthetics, futures, and Art Nouveau as refusal, I’d be remiss not to mention solarpunk, which checks all those boxes. Read On the political dimensions of solarpunk by ADH and Solarpunk: A container for more fertile futures by Jay Springett, to name but two.


§ Ancient everyday weirdness. It’s a fifty-eight-minute read about ancient and weird Every Day Carry, so I’m not going to read it thoroughly (yet?) to summarise it here. But it’s from Bruce Sterling, who’s good at rabbit-holing, and the piece is fun to browse through.


Futures, Fictions & Fabulations

  • The future isn’t fixed. Who gets to imagine it matters. “Who gets to imagine our collective futures? And whose visions shape the world we’re building? This report shares what we learned about why we must democratize the imagination of the future from seven years of funding futuring practitioners—artists, organizers, social workers, youth leaders, and visionaries—who are expanding who gets to imagine what’s possible.”
  • Future Making: Imagining and Crafting Futures in a Diverse World. “This special issue explores future making, broadly defined as a set of practices for imagining and realizing a state of things that does not exist yet. Given the interdisciplinary nature of future making, the special issue features contributions from across various disciplines, such as design studies, organization studies, and future studies.”
  • The signals we’re watching in 2026. Nesta’s “annual series about the trends and developments that are set to shape the coming year.” Sand crime, AI execs, crowd-sourced bus routes, and data unions, to name a few.

Algorithms, Automations & Augmentations

  • Science, promise and peril in the age of AI. Special series at Quanta. So. Many. Things. To. Read!! “It has changed everything, from how we relate to data and truth, to how researchers devise experiments and mathematicians think about proofs. In this special series, we explore how AI is changing what it means to do science and math, and what it means to be a scientist.”
  • 8 plots that explain the state of open models. Nathan Lambert “measuring the impact of Qwen, DeepSeek, Llama, GPT-OSS, Nemotron, and all of the new entrants to the ecosystem.”
  • AI’s wrong answers are bad. Its wrong reasoning is worse. “New research suggests that part of the problem is that these models reason in fundamentally different ways than humans do, which can cause them to come unglued on more nuanced problems.”

Built, Biosphere & Breakthroughs

  • A year of clean energy milestones. “‘Solar is no longer just cheap daytime electricity,’ said Kostantsa Rangelova, analyst at energy think tank Ember. ‘Solar is now anytime dispatchable electricity.’”
  • Managing climate risk through climate adaptation. “With sufficient resources and guidance, individuals, organizations, and governments can use climate adaptation approaches to reduce bad outcomes and preserve the things we care about.”
  • ‘The right to wind in your hair’. “Tens of thousands of Cycling Without Age volunteers help combat loneliness with the simple act of a bike ride.”

Asides
