Newsletter No.258 — Apr 02, 2023

To Shape the Future, We Must First Understand the Present ⊗ Society’s Technical Debt and Software’s Gutenberg Moment ⊗ We Don’t Need New Tech to Fight Climate Change

Want to understand the world & imagine better futures?

Also this week → Red Teaming improved GPT-4. Violet Teaming goes even further ⊗ AI and the American smile

To shape the future, we must first understand the present

A conversation with Johannes Kleske at Foresight Folk, a site I discovered thanks to that interview (it seems brand new). Excellent discussion with a number of useful insights from Johannes, like how he ties together generalism, transdisciplinarity, and foresight. For example, I ‘felt seen’ reading this quote, “I notice that the kind of people who feel comfortable in Foresight are the ones who previously thought they didn’t belong anywhere and who suddenly found this overarching bracket for themselves in which anything is possible and just about anything is allowed.”

I’ve linked to Kleske’s work before, especially for his focus on critical futures, a practice he explains quite well here, including a couple of examples of how work with clients might differ from “traditional Futurists’.” He quotes Fred Polak to explain critical futures, “we know that images of the future influence our decision-making, but are not present in our day-to-day perception. Once we become aware of this, we can decide which images of the future we allow to influence us and even shape them. This is the actual underlying idea of Critical Futures.” Pay special attention to the futures muscle workshop and the cows v fences metaphor.

I’m not sure where ‘real’ critical futurists would draw the line, but I’d like to connect the practice with stories more broadly. In a perhaps surprising link, I feel like Timnit Gebru in this Mastodon thread is doing something adjacent to critical futures, or critical stories. Reading the infamous letter calling for a six-month pause on AI, Gebru highlights the ideologies of many signatories and shows how they taint the actual content of the letter. One’s reading of it changes considerably once one understands the history behind their proposal.

The idea of a home discipline does not exist for me. I think that’s very helpful, because you don’t look at the future through one lens, but rather broaden your horizon by applying different lenses. For me, this is one of the most critical skills and the goal of Futures Studies and Foresight: to expand the realm of possibilities. […]

[W]hen it became clear that [waterfall] was very inflexible and not fit for a volatile world, agile methods became popular. Then, however, people realised that although they could act much faster, they lacked something to guide and ground their decision-making. In the search for something they could work towards and that would guide their actions, more focus was placed on the future to avoid being driven only by the present. The desire was to act, not just react. This was the decisive change: the sudden new openness to engage with the future. […]

In Critical Futures Studies, we distinguish between two different futures states. The Future Present is a point in time in the future that will eventually become the present. In contrast, Present Futures deal with what’s in our minds in the here and now regarding our wishes, hopes, expectations and assumptions about the future. […]

In my keynote I have a slide with the quote: “I don’t have time to build fences, I have cows to catch.“ One of the things I’m thinking about is how can we help leaders spend more time building fences? So they can react less and act more. Shape the future instead of catching up or putting out fires. Unfortunately, the system is not built this way.

Futures, foresights, forecasts & fabulations → Red Teaming improved GPT-4. Violet Teaming goes even further. “Reducing harmful outputs isn't enough. AI companies must also invest in tools that can defend our institutions against the risks of their systems.” ⊗ Speculative futuring meets AI, experiences from an AI-enabled co-futuring workshop. A short post without much detail, but a good idea to try. ⊗ A while back I linked to Worldbuilding Pt.1 at Dirt, but the old link was gone; here’s the new place to find it, along with the follow-up Worldbuilding, Pt. 2.

Society’s technical debt and software’s Gutenberg moment

In a similar fashion (see the article above if you haven’t), it’s quite important to keep the writers’ backgrounds in mind when reading this piece. Paul Kedrosky and Eric Norlin are VCs. Whether they are correct, partially correct, or wrong doesn’t depend entirely on their jobs and histories, but it is definitely influenced by them.

Keeping those priors in mind, it’s still an excellent piece with an angle I hadn’t read before. Briefly: programming languages are very strict languages and thus a great playground for AIs, which means that the next big technological drop in price, after CPUs, storage, and bandwidth, might be software itself. They propose that the development of software has been expensive, resulting in a society-level technical debt. A lot of things could have been “eaten by software” but weren’t; now, with AI writing software (or greatly assisting in its writing), those prices will fall dramatically.

One thing that nagged me about their argument: if AI is so great in the grammatical domain, how come it’s also getting great at art? The two seem to be at odds, yet both are being disrupted at the same time. Weird that it wasn’t mentioned.

Two other thoughts. First, if AI is so good at programming, does that partially explain why its developers are often freaking out and talking about AGI? Something like “it can already do my job, it must be brilliant and on its way to godhood.” Second, another ‘the underlying story affects the perception’ angle: this piece arguing that the problem with AI is the problem with capitalism. Much of our fear really stems from the fact that AIs are produced by rabid capitalists; automation wouldn’t be as scary if it were produced for the common good instead of to extract profits and wipe out livelihoods in the process.

A final thought: if the authors are right, then this new exponential race for performance will be even faster than hardware’s over the last few decades. What’s the equivalent of Moore’s (RIP) law for AI models?
(Article via my friend Sébastien Provencher.)

So where will that come from? We think this augmenting automation boom will come from the same place as prior ones: from a price collapse in something while related productivity and performance soar. And that something is software itself. […]

This framing—grammar vs predictability—leaves us convinced that for the first time in the history of the software industry, tools have emerged that will radically alter the way we produce software. This isn’t about making it easier to debug, or test, or build, or share—even if those will change too—but about the very idea of what it means to manipulate the symbols that constitute a programming language. […]

What if the cost of producing software is set to collapse, for all the reasons we have discussed, and despite the internal Baumol-ian cost disease that was holding costs high? It could happen very quickly, faster than prior generations, given how quickly LLMs will evolve. […]

It is an exaggeration, but only a modest one, to say that it is a kind of Gutenberg moment, one where previous barriers to creation—scholarly, creative, economic, etc—are going to fall away, as people are freed to do things only limited by their imagination, or, more practically, by the old costs of producing software.

We don’t need new tech to fight climate change

Good piece by Paris Marx on the stories and envisioned futures (see what I did there?) put forward by tech moguls and the tech industry in general, versus what can be done against global warming now, without any need for new technologies. Policies enacted without listening to the fossil fuel and tech lobbies could cover everything we need to do; no need for carbon capture or new terra-something reactors from Bill Gates.

On the other hand, says I, it’s not one or the other; it’s both, with the right balance. A huge chunk of our limited success so far in lowering the ceiling on how bad things can get is due to the gigantic drop in prices for solar, wind, and batteries. That wasn’t brought about by innovation out of the blue or by the invisible hand of the market; it’s due to decades of policies, funding for basic science, and industries scaling up production thanks to learning curves.

On climate change, Gates takes a similar approach to his vaccine work. His focus is on empowering entrepreneurs and investors, and getting governments to fund more basic research that wouldn’t be profitable for the private sector. […]

In the European Union, which was the first to make a major effort at implementing carbon trading, 85% of offset projects studied in a 2016 report were found to have a “low likelihood” of real emissions reductions. […]

The Summary for Policymakers of the IPCC’s latest report even warns that “Implementation of CCS currently faces technological, economic, institutional, ecological-environmental and socio-cultural barriers. Currently, global rates of CCS deployment are far below those in modelled pathways limiting global warming to 1.5°C to 2°C.” In short, CCS won’t save us.

Algorithms, Automation, Augmentation → AI and the American smile. How AI misrepresents culture through a facial expression. Fascinating. But also, you’re asking for something that never existed, of course it’s inaccurate. ⊗ I tried 200 AI tools, these are the best. Dizzying! ⊗ Mozilla launches a new startup focused on ‘trustworthy’ AI. Love the mission, can’t say I’m expecting much from them. ⊗ The swagged-out pope is an AI fake — and an early glimpse of a new reality. “Images of the pope wearing a white puffy jacket have gone viral online. They’re AI-generated but show how difficult it will be to distinguish fakes from reality in the future.”