The subprime AI crisis ⊗ The next great divide ⊗ Promiscuous pedagogies
No.325 — To understand Mississippi with a 480-year-old map ⊗ Once Upon A Future ⊗ Computational reproducibility
I went a bit long on my commentary on the first two features, so to keep this at a reasonable length, there are only two main articles this week.
The subprime AI crisis
TL;DR: There’s a bubble, but there’s AI life after it; no need to look at everything in the worst possible light.
If you follow sports (or US elections, I guess), you might have noticed that fans see the exact same game completely differently depending on which team they are rooting for. Reading this piece by Ed Zitron felt kind of the same: great fact gathering and great analysis, but viewed from the perspective of a life-long hater of the team, and of someone who seems to care about those businesses and the market. I’m not a fan of the team either (i.e. the big tech AI hype merry-go-round), but I also don’t care much about the market, so I like to think that I can look at his work as somewhat of a neutral observer. Or at least, unfazed by the investors’ prospects.
All of that to say that Zitron produced a long read with loads of links and details on why the AI bubble is about to pop, what they (largely OpenAI) are doing wrong, how bad their financial situation is, the crazy deals investors seem to have signed, when the money will run out, and the ginormous gap between revenue and expenses.
I think there are two problems of expectations and a few problems of speed. OpenAI is racing ahead trying to build an AGI, not a business with its current products. What would it look like if it stopped trying to train a huge model every year at incredible expense and little revenue? I’m not saying they’ll do it, but saying that a company fixated on AGI proves it’s impossible to make money right now is, in my opinion, not the right argument. The expectations of Altman and his investors are of building a god; the expectations of some observers are not as out of whack, but still too elevated. That doesn’t mean there’s nothing to be done in a different form, with non-god expectations.
Zitron says “absolutely nothing about the generative AI hype cycle has truly lived up to the term ‘artificial intelligence.’” True, but are we trying to determine if something useful is at the core of that bubble, or are we hoping for some hot Turing action before the end of the year? LLMs can be useful even while true AI doesn’t exist. He also says that “We’re not ‘in the early days’” and then gives investment numbers since … 2022. That’s definitely early days, even if the money invested is insane.
In terms of speed, I’ve mentioned this before: I think the speed of expenses (including huge new models every year), the speed at which people are trying out AI, the speed at which consumers are adopting it and finding value, and the speed at which companies are adopting it and finding value are all misaligned. Are they going to align? I don’t know, but the fact that corporate clients have problems finding value with ChatGPT, again, does not mean it’s not possible to make a business out of AI.
In the end, I do agree with Zitron’s diagnosis of a coming bursting bubble, but I believe there is room for other, less singularity-focused companies to have healthy business models (Midjourney, to name one). I also think that even if OpenAI, Microsoft, Google, and Anthropic stopped investing or folded, there are enough open source models and enough work already done for years of trying stuff and figuring things out. And that’s without even talking about the smaller models, which are proving increasingly effective and have benefitted greatly from the current pedal-to-the-metal investment in big models. The Pets.com of web 1.0 died, but the early web 2.0 saw a flourishing of great, imaginative small companies; there’s plenty of use for AI beyond the figureheads. I’m looking forward to the Flickrs and Wikipedias of post-Altman days.
This means that OpenAI would likely be burning more like $6 billion a year on server costs if it wasn’t so deeply wedded to Microsoft — and that’s before you get into costs like staffing ($1.5 billion a year), and, as we’ve discussed, training costs that are currently $3 billion for the year and will almost certainly increase. […]
What we have here is a shared delusion — a dead-end technology that runs on copyright theft, one that requires a continual supply of capital to keep running as it provides services that are, at best, inessential, sold to us dressed up as a kind of automation it doesn’t provide, that costs billions and billions of dollars and will continue to do so in perpetuity. Generative AI doesn’t run on money (or cloud credits), so much as it does on faith. The problem is that faith — like investor capital — is a finite resource. […]
It’s time to stop and take stock of the fact that we’re in the midst of the third delusional epoch of the tech industry. Yet, unlike cryptocurrency and the metaverse, everybody has joined the party and decided to burn money pursuing an unsustainable, unreliable, unprofitable, and environmentally-destructive boondoggle sold to customers and businesses as “artificial intelligence” that will “automate everything” without it ever having a path to do so. […]
Everything is about further monetization — about increasing the dollar-per-head value of each customer, be it through keeping them doing stuff on the platform to show them more advertising, upselling them new features that are only kind of useful, or creating some new monopoly or oligopoly where only those with the massive war chests of big tech can really play — and very little is about delivering real value. Real utility.
The next great divide
Considering the first one above, I guess this is “post by someone I respect but they are missing something” week. It’s the second time David Mattin tackles what he calls a great divide; the first was Creatures and machines. In both, he’s talking about ‘accelerate vs decelerate’ and its final form of ‘augmented vs organic.’
I think he’s correct that at some point there will be such a divide, something we will have to live with, and one that carries a high potential for conflict. But I have two issues with his characterisation. First, that he talks about the singularity and Kurzweil with no irony and no mention of any other aspect of TESCREALism, a quasi-religious strain that is problematic right now, not in some distant future. (See some of the same ‘god-building’ people in the piece above.)
But I want to focus on the other part, where he equates people showing technological restraint with a caricature of the degrowth movement. Jason Hickel, to name one, might go quite far in some of his thinking, but he does not just want to restrain tech; he offers a solid vision of how we could degrow our current neoliberal capitalist extremism. There are others. I’ll leave this one for another time.
The bigger problem I have is that putting every non-accelerator in one degrowth bucket and branding them (us) as anti-tech is inaccurate. There is (and I have linked to numerous articles in the past) a perfectly reasonable way to appreciate technology, be for thoughtful progress, appreciate what it has brought society, and yet realise that ‘we’ (the West mainly, and increasingly others) are grossly overusing what the planet can provide, putting in peril not just our lifestyle, although that too, but the very conditions supporting human life. This realisation can, should, perfectly logically make us decide to slow the f down.
Mattin is correct that there will be a divide, but it will be between two sets of extremes, early trans-humans and back-to-the-land people, while a vast middle struggles with the jackpot and/or with a global refactoring of our economy and society. It’s good to worry about his divide, but it will not be the most dire thing to deal with, nor the one that implicates the most people.
Ultimately, this polarisation will revolve around those who repudiate the technologies now taking shape around us, and those who would not only use intelligent machines but become one with them. Become, as they will see it, something more than human. […]
Once brain-machine interfaces of this kind become safe and affordable, everyone will face a choice. To augment my brain, or not? To become one with the machines, and with silicon-based superintelligence, or to remain an entirely organic human creature? […]
If we’re going to navigate the coming Exponential Age, we urgently need to evolve towards a new system. We must build a post-liberalism founded in the need to accommodate not differences in religious belief, but differences in outlook when it comes to the fundamental shaping force in our societies now, which is technology.
§ Promiscuous pedagogies: doing interdisciplinarity with Marina Otero Verzier and Shannon Mattern. I was going to feature this one, but it’s probably a bit too niche for many readers. However, if you’re interested in academia, in leaving academia, in understanding it, in transdisciplinarity and how to actually build bridges, and/or you just enjoy generalist-adjacent discourse (that would be me), this is a good read.
§ To understand Mississippi, I went to Spain. Wright Thompson goes to see a 480-year-old Spanish map that shaped his “home state’s violent history.” “The dominant urge of the Industrial Revolution was to fill every space on every map with people who could extract resources and multiply wealth.”
§ Cybernetic countercultures intensive online course. “Join Dr. Bruno Clarke and Dr. David McConville of Gaian Systems for a 12-week online course exploring the history of cybernetics & how ecological ideas inform today’s adaptive systems. Monday, 9/30 - 12/16.” Readers of Sentiers get 25% off, whether it’s to attend live online or for audit access. Because of a timing conflict I’ll be using the latter; it looks like it will be fascinating!
Futures, Fictions & Fabulations
- Once Upon A Future. “You’re invited to zoom into the future and explore New York 50 years from now. Take a look at a world that serves us and our planet simultaneously. A society that does more than move money around but delivers good lives to us all. In this Giant Leap scenario, New York City embraces its freewheeling energy, innovation and optimism to become a model of urban climate resilience.”
- Mindful and emotional interfaces: Imagining a future of human-machine interactions. “This speculative design project envisions a future where careful use of Brain-Computer Interfaces and AI redefines human-machine interaction within sustainable urban environments. This narrative offers a strategic lens for UX design and business development.”
- Tomorrow’s Wardrobe. “What is the future of fashion at a time of climate crisis? Discover the urgent research and innovation taking place to design a future for fashion that is both stylish and sustainable.”
Algorithms, Automations & Augmentations
- AI makes effective solar cells—and explains the results. “Organic light-harvesting molecules with a five-fold improvement in stability over their predecessors. Moreover, the new system can explain what makes these novel compounds more stable, to help scientists design better molecules in the future.”
- Lionsgate signs deal to train AI model on its movies and shows. “Vice chair Michael Burns described it as a path toward ‘creating capital-efficient content creation opportunities’ for the studio, which sees the technology as ‘a great tool for augmenting, enhancing and supplementing our current operations.’”
- Can AI automate computational reproducibility? “Today, we introduce a new benchmark for measuring how well AI can reproduce existing computational research. We also share how this project has changed our thinking about ‘general intelligence’ and the potential economic impact of AI.”
Built, Biosphere & Breakthroughs
- Christopher Brown’s A Natural History of Empty Lots. Cory Doctorow reviews Brown’s book; the subtitle alone is quite enticing: “Field notes from urban edgelands, back alleys and other wild places.”
- Pacific islands submit court proposal for recognition of ecocide as a crime. “Vanuatu, Fiji and Samoa have proposed a formal recognition by the court of the crime of ecocide, defined as ‘unlawful or wanton acts committed with knowledge that there is a substantial likelihood of severe and either widespread or long-term damage to the environment being caused by those acts’.”
- The US finally takes aim at truck bloat. About time! Also a good example to explain the concept of externalities. “The proposed rules come amid a deadly period for pedestrians in this country. Each year, cars kill roughly 40,000 Americans. But while automakers have become very good at protecting people inside of vehicles, they have essentially neglected the safety of people outside of them.”
Asides
- 😍 🏔️ 🇩🇪 🎥 📸 Very Wes Anderson. Flit through ‘miniature’ Bavarian Alps in this aerial tilt-shift exploration. “Daiber captures a unique, ‘miniaturized’ view of the region through a technique that plays with focus and angle. Tilt alters the focal plane, and the shift effect changes an image’s perspective, making vast landscapes appear like models or toys.”
- 🤩 🧱 🇨🇳 AZL architects wraps twisting jinling art museum in 139,000 bricks. “The design challenge was significant, as the architects were tasked with creating a space that not only serves as an art museum but also encapsulates the rich urban history and cultural traditions of Nanjing.” (Via @minorstep who has a picture of the construction, not just elevations.)
- 😱 ☠️ 🧫 🤖 Biobots arise from the cells of dead organisms − pushing the boundaries of life, death and medicine. “In our recently published review, we describe how certain cells – when provided with nutrients, oxygen, bioelectricity or biochemical cues – have the capacity to transform into multicellular organisms with new functions after death.”
Let’s work together
Hi, I’m Patrick, the curator and writer of Sentiers. I pay attention to dozens of fields and thinkers to identify what’s changing, what matters, what crosses boundaries, as well as signals of possible futures. I assemble these observations to broaden perspectives, foster better understanding, enhance situational awareness, and provide strategic insight. In other words, I notice what’s useful in our complex world and report back. I call this practice a futures observatory.
This newsletter is only part of what I find and document. If you want a new and broader perspective on your field and its surroundings, I can assemble custom briefings, reports, and internal or public newsletters, and work as a thought partner for leaders and their teams. Contact me to learn more or get started.