Newsletter No.250 — Jan 29, 2023

The Map Room Is a Physical Room-Size Wiki for Collaboration From the 1950s ⊗ Computers Enable Fantasies ⊗ Imagination Infrastructure — What Do We Mean?

Also this week → Accelerating transition by concentrating on positive tipping points ⊗ Who owns the generative AI platform? ⊗ Worlding with digital twins ⊗ Science Fiction as a Futurist Tool

Issue No.250! That’s a lot of reading, synthesising, summarising, and lots of ‘I reckon’ commentary. Milestones are good opportunities for reflection, which has led me to ponder some changes to the membership system and a ‘lessons learned’ type of post but, alas, neither was finalised in time to make the cut. You’ll have to stick around a little longer to find out.

The gist of it: I’m happy with and proud of these missives that land in your inboxes every Sunday. Thanks for reading!

The map room is a physical room-size wiki for collaboration from the 1950s

Fascinating piece by Matt Webb, exploring map rooms that enable collaboration, shared knowledge, and sense-making. He gives a bunch of examples and shows how these rooms connect to the thinking of Engelbart and Licklider.

It’s great for the history and weird projects, but also for the last part on contemporary tools. I was especially nodding my head at this bit about Google Docs being ephemeral, which gets exactly at my issue with it over the last few months: “[t]here’s no shared ‘front page’ to Google Drive and no Schelling Points for team members to gather around, so all documents are temporary working documents (and doing otherwise is pushing water uphill).”

Conspiracy walls, memory palaces, map rooms. Matt is focused on the collaborative aspect, and I certainly agree with the need, but I also love these ideas of placing/visualising knowledge for better understanding, pondering, and sense-making, even for personal use.

My current ‘office’ is a corner of the living room, so I don’t really have the space for a ‘crazy wall,’ but even when I did, I always missed an integration with the digital. On the computer, meanwhile, I miss the tactility and space. I ‘only’ have a 27” screen, and even with the endless canvases that some apps now have, it lacks the right feeling.

Webb focuses on a remote collaborative space with projections for everyone, which is not the same use case, but I’m surprised nonetheless that he didn’t mention Dynamicland. As fantastic as that system looks, it’s still relatively early days and seems somewhat bulky when compared to index cards and post-its (or pages of a magazine’s flat plan). A digital garden full of notes doesn’t do it either, by the way. In other words, great exploration in that post but we are definitely not there yet.

Licklider had analysed his own thinking process, and discovered that 85% of the work was searching, calculating, plotting, transforming, and so on… bureaucratically “preparing the way”. As Engelbart put it: Every process of thought or action is made up of sub-processes – and that bureaucratic work is tractable to computer-aided support. […]

My learnings were that (a) maps should be authored not generated automatically; and (b) the map is a separate and just as valuable artefact as the territory that it maps. […]

Something software-enabled, something multiplayer, something that embraces “hybrid” so we don’t have to be either all in one geographic location or all at home.
I think embodiment matters here?

We all have that projector: me in my home office, and you in yours, and the others in the office meeting room. We navigate the map with gestures. It shows the same view for everyone. We don’t all need the identical physical setup – the projector can be large or small or point at any wall. For looking closer, we use our phones and tablets. Those are individual.

Computers enable fantasies

At the great LibrarianShipwreck, Z.M.L looks at the “continued relevance of Weizenbaum’s warnings.” I hadn’t heard of Weizenbaum and was happy to discover his thinking and his various interventions explaining how ‘we’ follow computers and AI too easily, not thinking deeply enough about what “ought to be made” instead of jumping straight into what “can be made,” never mind taking time to consider who should have a say in these matters.

Humanity often seems to miss the nice middle that would have been better for its own good; Weizenbaum’s thinking shows how we’ve done so with computers. A couple of years ago I wrote Just Enough, which was about stopping at enough, finding what is sufficient for the job without going to eleven (or even eight). Most of the examples in there are, of course, based on the fact that ‘we’ went much further than enough.

I tend to think of generative AI, as it stands now anyway, as kind of on the other side of enough. Bad music, so-so images, passable answers in chat, passable articles, etc. A good chunk of what is produced by AI replaces something of better quality, made by someone, with a ‘good enough’ option (I’m aware I’m doing that with some of my header images). We’re at risk of scrapping a whole lot of human work in favour of generative AI which might get to great, but might also stay simply good enough.

[E]ngaging with past critics of technology is a reminder that many decades ago there were some who saw the direction we were heading in—their work is a reminder that technology does not drive history, people drive history, though oftentimes those people driving history have started to worship technology. […]

“The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!” […]

It makes a fair amount of sense for commentary to narrowly focus in on a specific platform/company/individual; however, such a focus also carries a risk of placing all of the focus on that platform/company/individual while sparing from attention the underlying computer technologies. […]

[I]t is not sufficient to see that the computer is not the solution to every problem, but for us to see that the computer (and the faith in it) is part of the problem.

Imagination Infrastructure — what do we mean?

Last week I mentioned that I felt like I was “missing a critical detail in completely understanding this concept [of imagination infrastructure].” This quite well-referenced piece by Olivia Oldham goes a long way in filling in some of the blanks. Basically, I think I got it with my definition of the space it occupies, although now I might also say they are working on a grassroots futures practice that overlaps with (takes inspiration from?) futures studies and critical futures. I’ve got a file with some more links on the topic, which I might turn into a post soonish.

Given the inherent emergence of futures — both imaginary and realised — imagination infrastructures need to be understood as verbs, not nouns, actions, not things — processes of creation in a constant process of becoming. […]

The way the future is imagined is inherently selective, because the future is inherently unknowable. Anything could happen, so the things we choose to imagine must necessarily be a subset of what is possible. […]

In particular, imagination is often better-suited to dealing with questions of the future than other, more concrete and deterministic approaches, such as forecasting and prediction, as it enables explorations to be more open-ended, allowing for questions to be left open, rather than mandating answers.

Accelerating transition by concentrating on positive tipping points. Andrew Curry summarises a report that tries to explain how to trigger a cascade of tipping points to accelerate the net zero transition. Intriguing. “Mandating zero emissions vehicles. Mandating green ammonia use in fertiliser production. Redirecting public procurement to promote the uptake of alternative proteins.”

Who owns the generative AI platform? I try to stay away from a16z, but this one is worth a read if you want to start understanding where generative AI business models might be heading. (Very turbo-capitalist scale/moat/extract.) “There don’t appear, today, to be any systemic moats in generative AI. As a first-order approximation, applications lack strong product differentiation because they use similar models; models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.”

Futures, foresights, forecasts & fabulations → “Worlding is an emergent field at the intersection of five distinct disciplines: co-creation, climate futures, real-time 3D game engines, documentary storytelling, and land use planning.” Videos from the inaugural WORLDING workshop. ⊗ On the same topic: Using game engines and “twins” to co-create stories of climate futures. ⊗ Science Fiction as a Futurist Tool at Farsight.