How to think about what’s possible for tomorrow ⊗ AI hybrids ⊗ Why not Mars

This week →{.caps} How to think about what’s possible for tomorrow ⊗ AI hybrids ⊗ Why not Mars ⊗ Everything is deeply intertwingled ⊗ Two ways to think about decline

A year ago →{.caps} A favourite in issue No.202 was Oh, 2022! by Charlie Stross.

How to think about what’s possible for tomorrow

Rose Eveleth has spent the past eight years making over 180 episodes of a fantastic podcast about the future called Flash Forward, which I’ve mentioned here a few times in the past. Lots and lots of great episodes. The link above is to the first part of a trio of articles Eveleth wrote for WIRED in which they reflect on futures that haven’t happened yet and on how “we do, in fact, get a say, and we should seize that voice as much as we possibly can.” The articles focus in turn on “hopewashing,” how to live on the precipice of tomorrow, and the many metaphors of metamorphosis, exploring who influences the futures we consider possible, how good (or not) we are at predicting the future and understanding the importance of events, how to change instead of burning things down, and how ==“hope should be a place to start, not a feeling to marinate in. Not a warm bed, but the alarm that gets you out of it.”==

I often write and link to articles about the importance of imagining better futures, which usually focus on how to write and invent them. This series by Eveleth is good context, background, and inspiration for those reflections and inventions, as well as a great excuse to dive into the archives of the podcast.

[M]uch like we cannot let the work of building better futures be contingent on feeling hopeful, we can’t let corporations or those in power control the flow and definition of hope either. ==No company or politician can hand you hope. We have to build it in and among ourselves as a beginning, not as an end.== […]
How does one change the future? How do we get to the tomorrows we want and not the ones we don’t? And a core piece of that question has to do with the way in which insects melt themselves into goo. Must we fully dissolve ourselves and our world in order to get to the futures we want? Do we have to burn it all down, destroy it all, and rebuild from that melted space? Or can we change more gradually, more incrementally, more like the hermit crabs, upgrading slowly as we go? […]
==As Octavia Butler once said, “There’s no single answer that will solve all our future problems. There’s no magic bullet. Instead, there are thousands of answers—at least. You can be one of them if you choose to be.”==

AI hybrids

I barely listen to podcasts, so it’s a bit weird to start the new year with two recommendations based on mainstays of my thin podcast diet. Ezra Klein interviewed Gary Marcus for a skeptical take on the AI revolution. It’s an excellent and wide-ranging discussion, and I was especially drawn to the part on the “war” between the neural network and symbolic camps of AI research and development. Marcus argues for more balanced funding of both, and for hybrids of the two approaches: mixed solutions with distinct tools that work together.

And there’s this weird argument, weird discourse where people who like the neural network stuff mostly don’t want to use symbols. What I’ve been arguing for 30 years, since I did my dissertation with Steve Pinker at M.I.T. studying children’s language, has been for ==some kind of hybrid where we use neural networks for the things they’re good at, and use the symbol stuff for the things they’re good at, and try to find ways to bridge these two traditions.==

The day before listening to that interview, I was reading Stephen Wolfram’s essay proposing Wolfram|Alpha as the way to bring computational knowledge superpowers to ChatGPT. Here’s the basic ‘thesis’ of the essay:

For decades there’s been a dichotomy in thinking about AI between “statistical approaches” of the kind ChatGPT uses, and “symbolic approaches” that are in effect the starting point for Wolfram|Alpha. But now—thanks to the success of ChatGPT—as well as all the work we’ve done in making Wolfram|Alpha understand natural language—==there’s finally the opportunity to combine these to make something much stronger than either could ever achieve on their own.==

Remarkably similar to Marcus’, no? Wolfram goes on to give multiple examples where ChatGPT gives great-looking answers that are nonetheless complete bullshit, then compares them to Alpha’s answers based on the application of the Wolfram computational language. His argument perfectly matches my experience so far with the chat AI, which I’ve written about before{.internal}: it’s remarkably adept at writing human-like replies, but doesn’t understand what it’s saying as much as the quality of the writing might lead us to believe.

Are the next steps a question of scaling ever further and perfecting the models? Or a question of mixing statistical and symbolic tools? I’m definitely not in a position to give a strong argument for either, but Marcus and Wolfram make a fascinating case for the latter.

Related → I was mostly off during the holidays but did some feed reading and noted a lot of articles about AI. I recommend Maggie Appleton’s The Expanding Dark Forest and Generative AI, where she intersects the transition to smaller social networks with the coming flood of generated “content” that might make parts of the internet virtually uninhabitable. Yet to read but promising: The Fine Art of Prompting by Jon Evans ⊗ Enjoy Chatbots While They’re Free by David Karpf ⊗ An A.I. Pioneer on What We Should Really Fear (AI experts are increasingly afraid of what they’re creating).

Why not Mars

The excellent Maciej Cegłowski with a long, heavily annotated, and quite compelling essay (the first in a series, it seems) to persuade us “that we shouldn’t send human beings to Mars, at least not anytime soon.” I was already convinced, and I’m sure many readers here are too, but it remains a great read for all the science bits integrated in his argument. The section on bacteria is especially enlightening and proves, once again, how much we have yet to learn about our own planet.

Sticking a flag in the Martian dust would cost something north of half a trillion dollars, with no realistic prospect of landing before 2050. To borrow a quote from John Young, keeping such a program funded through fifteen consecutive Congresses would require a series “of continuous miracles, interspersed with acts of God”. Like the Space Shuttle and Space Station before it, the Mars program would exist in a state of permanent redesign by budget committee until any logic or sense in the original proposal had been wrung out of it. […]
These new techniques confirmed that earth’s crust is inhabited to a depth of kilometers by a ‘deep biosphere’ of slow-living microbes nourished by geochemical processes and radioactive decay. […]
==One path forward would be to build on the technological revolution of the past fifty years and go explore the hell out of space with robots. This future is available to us right now. Simply redirecting the $11.6 billion budget for human space flight would be enough to staff up the Jet Propulsion Laboratory and go from launching one major project per decade to multiple planetary probes and telescopes a year. It would be the start of the greatest era of discovery in history.==

Everything is deeply intertwingled

Gemma Copeland on intertwingled thinking, by way of Ted Nelson, backlinks, digital gardens, Claire L. Evans and Christopher Alexander (citing a piece I’ve previously written about{.internal}), Jenny Odell, Ursula Le Guin, and many other people familiar to readers of this newsletter. I’m including it here for its own sake, but also because of my own questions on backlinks.

The redesign of the Sentiers archives a while back was supposed to deconstruct all the issues into a digital garden with backlinks. The first part worked out great, but I never managed to get into the habit of short notes with lots of backlinking, or found the reflex/time to write additional notes that don’t appear in the newsletter. I’ve never gotten much feedback or proof of use of this archive, which makes me wonder about its usefulness to readers. And so, ==I’d love to hear from you: do you, or have you, used the archive, tags, etc.? Or perhaps you just use the web version of each issue? Or nothing at all? Please hit reply and help me decide on the way forward for the archive format and digital gardening.==

I think a digital garden full of bidirectional links is a kind of semilattice. The content can be collected, remixed and resurfaced in many different ways, appearing in lots of different sets according to the context. Working in this way requires a whole different approach to design. It’s complex and nonlinear, which can be challenging to get your head around compared to a tree website. Instead you have to understand it from the bottom-up, thinking in sets or patterns instead of trying to establish a top-down map or plan. […]
The task of hypertext is not to manufacture connections, but to discover where they have always been. ==Hypertext researchers before the World Wide Web built systems to support this endless, sacred hunt for entanglement and hidden structure, as inherent to thought as ecosystems are to the natural world.==
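To make the semilattice idea a bit more concrete, here is a minimal sketch of my own (not from Copeland’s essay, and the note titles are purely hypothetical): backlinks are derived automatically from forward links, so the same note surfaces in several overlapping sets at once, rather than sitting under a single parent as it would in a tree.

```python
# Illustrative sketch only: a tiny note graph with derived backlinks.
# Note names and tags are made up for the example.
from collections import defaultdict

notes = {
    "intertwingled": {"links": ["backlinks", "semilattice"], "tags": ["hypertext"]},
    "backlinks": {"links": ["digital-garden"], "tags": ["hypertext", "tools"]},
    "semilattice": {"links": ["digital-garden"], "tags": ["alexander"]},
    "digital-garden": {"links": [], "tags": ["tools"]},
}

# Every forward link implies a reverse one, so backlinks need no manual upkeep.
backlinks = defaultdict(set)
for source, note in notes.items():
    for target in note["links"]:
        backlinks[target].add(source)

# "digital-garden" is reachable from two different notes and also shares a tag
# set with "backlinks" -- overlapping contexts a strict tree could not express.
print(sorted(backlinks["digital-garden"]))                      # ['backlinks', 'semilattice']
print([n for n, d in notes.items() if "tools" in d["tags"]])    # ['backlinks', 'digital-garden']
```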

Two ways to think about decline

Big tech has been going through some turmoil, especially around companies’ valuations, but also their business models, how many employees they need, and what’s next for each of them. This issue of Tim Carmody’s Amazon Chronicles paints a very useful portrait of the ongoing transition of these companies.

In general, what characterizes this phase of the tech giants' development is a shift from unlocking user creativity and customer value to doubling down on surveillance, usually augmented by AI. Mass surveillance was always an important emergent part of the tech giants’ strategy, but was arguably secondary to delighting users and giving them greater capabilities. ==Now surveillance and nonhuman solutions are dominant, and the creative possibilities are now almost all residual.== […]
Instead of accelerating growth, we're seeing accelerated attempts to manage or ward off decline, where decline is much more narrowly construed as a loss of profits and revenue, rather than market share, user relevance, or technological innovation.

Futures, foresights, forecasts & fabulations

Love this new initiative led by Superflux! Cascade inquiry “imagines future worlds where positive climate action has been taken.” ⊗ Excellent design fiction playlist. ⊗ Sci-fi author Judith Merril and the very real story of Toronto’s Spaced Out Library ⊗ Dreams of a Resilient Planet “puts forward three new punk genres to help think of a better tomorrow, following traditions of classics like Cyberpunk, and Solarpunk.”

Asides

{.miscellany}

Your Futures Thinking Observatory