Positive climate tipping points ⊗ The battle for AI’s future ⊗ The ‘Georgists’ are out there

No.290 — The Iceberg Model ⊗ The CRISPR era is here ⊗ Future of Space Exploration ⊗ Generative AI in the Enterprise

Gray and blue shoreline, Vik, Iceland. By Jeremy Bishop on Unsplash.

How positive climate tipping points could save the planet

There’s a certain calculus that goes on when I’m considering articles for this newsletter: balancing my own interest, estimated interest to readers, length of the read, variety of topics, alternation of topics and sources, etc. In that guesstimate of ‘value,’ 26-minute reads like this one are a bit of a gamble: will it really carry enough weight to warrant the time I could instead spend on four six- or seven-minute reads? Well, this one by Katarina Zimmer at NOEMA definitely did.

The advantage of such a long read is that the author can, and here pretty much did, cover all of the places where I was thinking ‘yes, but’ or ‘sure, but you didn’t talk about this,’ and other lines of questioning. One might second-guess the balance, but roughly everything you’d want in there is covered.

Researchers have identified “positive tipping points” in the global economy and society that, once achieved, could trigger a rapid transition away from emitting greenhouse gases. Examples of these tipping points include the adoption of electric vehicles in Norway and the phasing out of coal-generated power in the UK.

Technological change can occur quickly with policies that get green technologies over certain thresholds and thus turbocharge the green transition. There is potential for positive tipping points in sectors such as livestock farming, building heat sources, green hydrogen, and the protection of carbon-hoarding ecosystems.

I was not aware of the Breakthrough Agenda, which was put together by a number of researchers and was part of COP26. At the meeting, “45 countries committed to making green technologies the most attractive option in each high-emitting sector, from agriculture to steelmaking, by 2030.”

There are a number of caveats, well explained in the piece, such as social tipping points and the need for deeper change. An example of the former might be the current upward trajectory of the far right in Europe; the latter could be things like building walkable cities where there’s less need for cars, expanding train networks, or shifting eating habits towards less meat instead of simply replacing it with fake or lab-grown versions.

Great read, I recommend you check out the whole thing. At the very least it’s useful from a systems thinking perspective, and a hopeful and intriguing angle through which to analyse the climate crisis and the work to be done.

But it might be more effective to break things up into sectors, identify the countries with the greatest capacity and willingness to make a difference in that sector and figure out how to unlock tipping points in each one. […]

Strong policies in specific sectors — mandating zero-emissions vehicles, requiring green ammonia for fertilizer production and introducing alternative proteins into public institutions like schools and hospitals — could have a much wider impact beyond those individual sectors. Activating these three “super-leverage points,” according to the report, “could trigger a cascade of tipping points for zero-carbon solutions” in various sectors that collectively represent 70% of global greenhouse gas emissions. […]

With battery demand set to rapidly grow many times over, there’s growing concern around exacerbating these calamities in regions that are already bearing the brunt of climate change impacts. […]

Some researchers worry that scientists, hoping to find reasons to believe there is a fast way out of the climate crisis, may be using the tipping points concept too loosely, identifying ones without sufficient evidence. Strictly speaking, a tipping point must rapidly drive a system into a substantially different state through self-reinforcing processes, like Greenland’s melting glaciers, and must be hard, if not impossible, to reverse.
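That strict definition (a self-reinforcing shift into a different state that is hard to reverse) can be made concrete with a toy bistable system. This is my own illustration, not from the article, and the dynamics and numbers are made up for the sketch:

```python
# Toy tipping-point dynamics: dx/dt = x - x**3 + f is bistable for small
# forcing f (stable states near x = -1 and x = +1). Past a critical f,
# the low state vanishes and the system self-reinforces into the high
# state; removing the forcing afterwards does not bring it back.

def simulate(forcing: float, x0: float = -1.0, steps: int = 2000, dt: float = 0.01) -> float:
    """Forward-Euler integration of dx/dt = x - x**3 + forcing; returns the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + forcing)
    return x

weak = simulate(forcing=0.2)             # stays near the low state (x < 0)
strong = simulate(forcing=0.5)           # tips past the threshold into the high state (x > 1)
back = simulate(forcing=0.0, x0=strong)  # forcing removed: settles at x = 1, not back near -1
print(weak, strong, back)
```

The point of the sketch is the last line: once the system has tipped, setting the forcing back to zero leaves it in the new state, which is exactly the hysteresis the stricter researchers insist on.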

OpenAI’s identity crisis and the battle for AI’s future

There are thousands of ‘think pieces’ about OpenAI. I’m sharing this one by Azeem Azhar because a good part of it focuses on governance and on what mix might have greater success than the recent drama, and because he offers a perspective on AI safety I hadn’t seen before: that there is actually a lot more work being done on the topic than for most other technologies in the past.

I’d counter that the speed at which SALAMIs are arriving might explain why it feels like there’s a lot more work being done, and that his perspective (and mine) on what was going on around those other technologies might be wildly incomplete, which makes it look like they surged into society with no safety rails. I mean, how much do you know, off the top of your head, about the security precautions around the arrival of the steam engine or of plastics? Regardless, worth a read and a ponder.

With previous widely applicable technologies and services, the breadth of this debate was variable or non-existent. Consider Uber’s explosion into our cities. Go further back in history: there was virtually no public awareness or debate around electricity or the roll-out of the steam engine or plastics. […]

I have some simple principles that I like to apply: that concentration of power is unhelpful, that a diversity of actors is helpful, that monocultures are not resilient…. in short, that more players is often better than fewer players. […]

Now might be the time to take this episode to heart and find a transparent system of governance as advanced and sophisticated as the AI it seeks to oversee.

The ‘Georgists’ are out there, and they want to tax your land

I first heard about Georgism through a post by Doug Belshaw but hadn’t taken the time since to look deeper into it; the article above does a good job of that.

Conor Dougherty looks at the current Georgesque project (under another name) in Detroit, the history of the idea, and the problems it aims to address.

“George’s argument was that since land derives most of its worth from its location and the surrounding community, that community, and not the owner, should realize most of the benefits when values rise.” And so: tax the value of the land instead of the value of what’s built on it, and encourage using land to build things in cities instead of hoarding it until someone else does something nearby and you reap the benefits.
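To make the incentive difference concrete, here’s a minimal sketch with hypothetical numbers (none of these figures or rates come from the article):

```python
# Compare a conventional property tax with a land value tax (LVT)
# for the same parcel, vacant vs. built on. Hypothetical values.

def property_tax(land_value: float, building_value: float, rate: float) -> float:
    """Tax on the combined value of land and improvements."""
    return (land_value + building_value) * rate

def land_value_tax(land_value: float, building_value: float, rate: float) -> float:
    """Tax on land value only; improvements are untaxed."""
    return land_value * rate

land, rate = 100_000, 0.02

# Vacant lot vs. the same lot with a 400k building on it.
for building in (0, 400_000):
    pt = property_tax(land, building, rate)
    lvt = land_value_tax(land, building, rate)
    print(f"building={building}: property tax={pt:.0f}, LVT={lvt:.0f}")
```

Under the property tax, constructing the building raises the owner’s bill fivefold, which penalizes development; under the LVT the bill is identical whether the lot sits vacant or not, so holding land idle costs as much as putting it to use.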

It’s not in the article, but I’d like to attach this idea to the commons and to extraction. It’s not usually (or ever) how markets depict it, but ‘we’ are stewards of the planet and should be fairly compensated when corporations want to extract from it, with the added benefit of discouraging some uses and carelessness. Commons should be fairly used, and wealth generated through collectively financed projects should not accrue to just a few people. Lots of ‘shoulds’ in there, but the original Georgist idea presents an option for fixing one part of this kind of imbalance we see everywhere.

The fundamentalist version of Georgism, like the fundamentalist version of anything, is plainly unrealistic. But the broader Georgist framework is full of insights about urban economies and how to improve them. […]

They encourage housing development instead of discouraging it, she noted. They don’t discourage work or investment, like taxes on income and capital gains. They’re also hard to dodge, since land is hard to move.

§ Using the Iceberg Model to understand complex situations. Jorge Arango explains the Iceberg Model, a framework for understanding complex situations by diving deeper into the events, patterns, structures, and mental models that shape them. Using the recent turmoil at OpenAI as an example, he explains how focusing only on the surface-level events can lead to misunderstandings and how applying this framework can help us gain a deeper understanding of any complex system.

§ The CRISPR era is here. Scientists have made groundbreaking progress in gene editing with CRISPR technology, using it to treat sickle-cell disease and virtually eliminating its debilitating and deadly effects in 29 of 30 eligible patients. Multiple researchers in the field are already exploring more sophisticated versions of CRISPR that could lead to treating more complex diseases.

Futures, fictions & fabulations

Strategic Scenarios for European Space Exploration 2040-2060
“The study delineates four scenarios for the long-term future of space exploration, emphasizing the critical uncertainties and forces that will shape its trajectory and underscore its significance.”

Use GenAI to improve scenario planning
They present a bit of a weirdly limited view of scenarios in this piece, but still provide some good points on the various ways generative AI can be used in scenario creation, narrative exploration, and strategy generation.

Library Strategic Foresight Report
“The main purpose was to explore and illuminate possible future outcomes for libraries, including considering what a preferred future might look like. Informed by research and Library stakeholders, a preferred future vision can help inform the Library’s current strategic planning work.”

Algorithms, Automation, Augmentation

What happens when your AI girlfriend dies?
“Thousands of people have been ghosted by their AI girlfriends after the shutdown of virtual companion apps such as Forever Voices and Soulmate.”

Generative AI in the Enterprise
“We wanted to find out what people are actually doing, so in September we surveyed O’Reilly’s users. Our survey focused on how companies use generative AI, what bottlenecks they see in adoption, and what skills gaps need to be addressed.”

Institute for Technology and Humanity
“By integrating the Leverhulme Centre for the Future of Intelligence (CFI), the Centre for Human-Inspired AI (CHIA), and the Centre for the Study of Existential Risk (CSER), the new initiative will contain historians and philosophers as well as computer scientists and robotics experts.”


Your Futures Thinking Observatory