The government knows AGI is coming ⊗ AI’s productivity paradox ⊗ Thinking time

No.347 — The handoff to bots ⊗ AI for Future Cities ⊗ FLORA ⊗ Why ancient Roman concrete was so durable

Grand Prismatic Spring, United States. Chris Leipelt on Unsplash.

Quick note to supporting members. I realised this week that there was an abnormally high number of failed transactions dating back a few months—which usually means an expired credit card. This probably happened because of the transition from one platform to the other last year, even though payments remained through Stripe.

If you think you are paying for a membership, please head over to the website, click “Sign in” at the top right, you’ll get a sign-in link by email, then come back and click “Account” in the same corner. In the modal window, if you read “See plans,” that means you are not a member and your membership lapsed because of the expired card. Please check your Stripe account to update your card and start a new paid membership. Thanks!! (If you were on a “grandparented in” $50 tier and want to reclaim it, hit reply and I’ll send you a link.)

This is a long one, with more commentary than usual, so buckle up!


The government knows AGI is coming

Ezra Klein interviewing Ben Buchanan, the former special adviser for artificial intelligence in the Biden White House. I believe he’s crediting the present administration with too much sanity, and he’s annoyingly pro-America (sorry American friends), especially these days, when the rest of the world clearly can’t believe (if we ever really should have) that the US leading something is necessarily good. He refers to the latter part of the Kennedy moon speech:

Like nuclear science, all technology has no conscience of its own. Whether it will become a force for good or ill depends on man, and only if the United States occupies a position of preeminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war.

Believing it can still be is one thing; believing US AI supremacy is the right thing for the world right now is … dubious. (Btw, if you have an argument for how any place other than the US or China might reach some form of AI dominance before them, I’m all ears.)

Note → I’ve said a few times that I wish there were a scale on which to place people’s opinions about AI. From 1 to 10, bullshit to “god,” this conversation probably rates as a … 7(?) as the likely state of the field at some point during the Felon’s term.

That being said, the purely AI stuff is quite enlightening; he explains his ideas well and contextualises them more than many people on the topic do. His “canonical” definition of AGI is another confirmation of what the acronym now means for the industry, i.e. not a sentient synthetic version of human brains but a Swiss army knife of knowledge tools wrapped in a natural language layer. “Transformative AI” comes up later on and is actually a better name for this next phase, imho.

A canonical definition of A.G.I. is a system capable of doing almost any cognitive task a human can do. I don’t know that we’ll quite see that in the next four years or so, but I do think we’ll see something like that, where the breadth of the system is remarkable but also its depth, its capacity to, in some cases, exceed human capabilities, regardless of the cognitive discipline.

They also discuss cybersecurity, how dominant AI can/could be used with the massive amounts of data already available, what it might mean as a strategic advantage against other countries; the labour market implications, although they are very vague on how that might turn out; surveillance, autocracy; the chip export controls; the Biden administration’s AI policies more broadly; what the accelerationists around the Felon think, and more besides.

Two things I’ll keep in mind. Klein offers this metaphor, which is a great angle from which to look at this race:

It’s like we are trying to build an alliance with another almost interplanetary ally, and we are in a competition with China to make that alliance. But we don’t understand the ally, and we don’t understand what it will mean to let that ally into all of our systems and all of our planning.

It’s no surprise I noticed that part, since in No.331 I shared this: The 3 AI use cases: Gods, interns, and cogs, where “Drew Breunig presents some useful buckets/categorisations for types of AI, although I dislike the use of ‘gods’ and would have gone with something like Aliens (or entities), Assistants, and Automations.”

Second, Klein raises the very valid and not often discussed point that there is a lot of talk about AI “taking” jobs, perhaps creating some, of UBI and other potential “solutions,” but other than retraining, there aren’t all that many policy options for how the transition could take place. Buchanan doesn’t provide any better answers. Adjacent is the question of how government bureaucracies can hope to adapt fast enough. Other than “they can’t, they are too slow,” no great opening to solutions here either.

(The YouTube version is linked above, but it’s originally from the NYT and includes the transcript.)

We were focused on defense because we think AI could represent a fundamental shift in how we conduct cyber operations on offense and defense. I would not want to live in a world in which China has that capability on offense and defense, and the United States does not. […]

We are rushing toward A.G.I. without really understanding what that is or what that means. It seems potentially like a historically dangerous thing that A.I. has reached maturation at the exact moment that the U.S. and China are in this Thucydides’ trap-style race for superpower dominance.

AI’s productivity paradox: how it might unfold more slowly than we think

Interesting thought experiment here by Azeem Azhar, where he explores the possibility that AI disappoints, failing to deliver significant productivity improvements within five years despite its promise as a general-purpose technology. He argues that challenges such as uneven adoption across firms, particularly among large companies, and systemic constraints may hinder AI’s broader economic impact. While massive investments are pouring into AI, there’s a risk of over-investment leading to diminished returns, akin to past tech bubbles. Points / questions that came to mind while reading:

  • The whole experiment and much of the interview above can be thought of as being about the unknown speeds of different components and the relations between them. Speed of adoption in organisations (and by consumers); speed of adoption vs bottlenecks (creating new drugs vs the “bottleneck” of testing and approving them); speed (or relative slowness) at which revenues grow vs imperatives of return on investment.
  • Where are there no bottlenecks? Which industry or government office might operate with no bottlenecks? Not by removing them, just by how they already operate. Consultants or coders, for example, can conceivably be augmented by AI and deliver with no bottleneck.
  • “In the U.S., large firms with over 500 employees drive 70% of GDP.” If he’s right that smaller enterprises will be fastest to true AI integration (they are more nimble, but do they have the budget?), which countries might benefit from a larger share of SMEs in their economy?
  • Azhar quotes Jim Chanos in a Paul Krugman piece: “So far, AI is doing the opposite [of the internet boom]. It is a massively capital-intensive business. Someone joked that the top tech companies are now looking like the oil frackers did in 2014, 2015, where more and more capital is chasing arguably a variable return. […] The numbers are now getting so large from just even a couple years ago that the returns on invested capital are really now beginning to turn down pretty hard for these companies.” AI frontier labs are fracking our data and face a predicament similar to that of real frackers.
  • Which of course reminded me of Jay Springett’s cultural fracking. In all three cases, they are running out of frackable stuff.

In the short term, the impacts of a GPT are marginal across an economy, although individual firms can do well. In the medium term, as the technology becomes more widespread, firms are able to retool around the technology and a plethora of upstream and downstream businesses emerge. This all drives growth. In the long term, a technology’s contribution to productivity growth flatlines as it becomes normalised. […]

Thirty years into the internet boom, the digital economy contributes 10% to US GDP. Expectations for AI are much, much higher. […]

But companies and industries need to absorb this intelligence. And the economy might have natural saturation points that limit uptake. It took companies, and then industry, two decades to change their processes to make use of electricity in the factories. Even the humble typewriter took roughly two decades to be absorbed into companies.

Thinking time

David Mattin discusses the concept of “thinking time,” inspired by OpenAI research scientist Noam Brown’s observation that giving AI models just 20 seconds more to “think” can significantly enhance their performance. He reflects on his own experience, noting that while he has been busy with numerous projects, true depth and originality in work often stem from moments of contemplation rather than constant activity. A redefinition of productivity that includes time for open thought might be crucial for generating impactful ideas in a rapidly evolving technological landscape.

But it’s pretty obvious, really, that busy is not the ultimate metric here. It’s not the end game. After all, sheer quantity of work is not the aim. Rather, I’m searching for depth. And originality. […]

What I’m questing after, instead, is an idea with power. That new framework that helps us see a relationship we’d previously missed. The essay that says something new about our experience in this moment. […]

[An idea with power] doesn’t come from busyness. It comes from open thought. From long walks. From space to breathe.

The handoff to bots

Kevin Kelly writing about how, in the coming decades, global population is expected to decline significantly, creating a unique economic challenge as we transition from a labour force of biologically born individuals to a synthetic economy powered by artificial intelligence and robots. This shift, which he refers to as the “handoff to bots,” will require us to rethink how we operate without population growth, potentially redefining growth as maturity and innovation rather than a mere increase in numbers. As fewer humans remain, our focus could shift towards creative, non-productive pursuits, while machines handle the majority of productive tasks.

I’d put a big caveat on this angle: “The economy of the Made, a synthetic economy, is powered by artificial minds, machine attention, synthetic labor, virtual needs, and manufactured desires.” But otherwise it’s true that the intersection of falling demographics, degrowth(s), and automation is one of the wicked problems we face. As the Klein interview highlights, we aren’t even considering enough options to deal with the clash of AI with work, never mind putting it in the context of falling population.

The capitalist system we have built around the globe thrives on growth. Progress has been keyed to growth of markets, growth of labor, growth of capital, growth of everything. However in the second half of this century, there will be no growth in humans. […]

Think of this as a handoff – a shift from one regime based on the biologically born to another based on the manufactured made. We are in transition from the world of the Born handing off to the world of the Made.

Side note → Kelly mentions that “since the year 2000, the official forecasts of what the world population would be in the near future have been incorrectly too high each year, because it has been hard to believe fertility rates could fall so fast and not bounce back.” Other forecasts that most people get wrong.

  • Installation rates and price of solar and batteries.
  • The “strength” and adoption of AI.
  • Various indicators of climate change and biodiversity loss. We keep hitting the worst side of forecast ranges or being surprised by the rate of change.

Sentiers is made possible by the generous support of its Members and
the modern family office of Pardon.

Futures, Fictions & Fabulations

  • AI for Future Cities: Urban Planning and Design “How are generative AI models changing the work and required competencies of practitioners and decision-makers? What does a city look like that has been shaped by AI? And how do we ensure that AI remains a tool we use intelligently?”
  • Open Call: Residency Ecological Futures 2025 “The residency provides a unique opportunity for visual artists and artistic researchers to work at the intersection of art, science, technology, and ecology. The aim is to foster art-driven interdisciplinary research and artistic creation that critically addresses and creatively explores ecological futures through digital and technological media.”

Algorithms, Automations & Augmentations

  • FLORA - True Creative AI. “There’s a smoothness to the canvas that’s reminiscent to the first time I used Figma. But instead of wrestling pixels to the ground, it brings me the curiosity I get when I massage Midjourney to my liking. I put my concern over credits to the side, and simply keep at it.” (Via thejaymo.)
  • LA Times to display AI-generated political rating on opinion pieces. “The new AI ‘Insights’ feature will only be applied to a range of opinion content in the paper, not its news reporting, according to a public letter announcing the change from Patrick Soon-Shiong, the medical entrepreneur who bought the Los Angeles Times in 2018.”
  • People are using Super Mario to benchmark AI now. “Hao AI Lab, a research org at the University of California San Diego, on Friday threw AI into live Super Mario Bros. games. Anthropic’s Claude 3.7 performed the best, followed by Claude 3.5. Google’s Gemini 1.5 Pro and OpenAI’s GPT-4o struggled.”

Asides

  • We finally know why ancient Roman concrete was so durable. “When cracks form in the concrete, they preferentially travel to the lime clasts, which have a higher surface area than other particles in the matrix. When water gets into the crack, it reacts with the lime to form a solution rich in calcium that dries and hardens as calcium carbonate, gluing the crack back together and preventing it from spreading further.”
  • The lab resurrecting ancient proteins to unlock life’s secrets. “Dr. Betül Kaçar is leading a project to resurrect ancient enzymes to reveal insights about life’s origins, the possibility of life throughout the cosmos, and how to adapt to a changing climate. Kaçar’s latest project focuses on nitrogenase, the only enzyme that makes nitrogen available to life — a key to both Earth’s past and future.”
  • Firefly Aerospace’s Blue Ghost successfully lands on the moon. “The Blue Ghost lander is carrying equipment for testing for NASA. The experiments include ‘lunar subsurface drilling, sample collection, X-ray imaging, and dust mitigation experiments.’”

Your Futures Thinking Observatory