Ruha Benjamin on DEI, but not really ⊗ Media codes and how to resist them ⊗ The end of search, the beginning of research

No.344 — Major AI copyright case in the US ⊗ Toolkit for applied strategic foresight ⊗ The Anthropic Economic Index ⊗ The Forest Speaks

Robotic philosopher, created with Midjourney.

Please don’t take Sentiers for granted! It takes a lot of work and quite a bit of time to write every week. If you have the means, I’m running a “flash membership drive” to find 25 new members who’ll benefit from 25% off. (Only on the yearly Members tier.)


Ruha Benjamin on DEI, but not really

I completely loved this interview, in which Trevor Noah (and Christiana Mbakwe-Medina, who never seems to appear in descriptions of episodes?) talks with the great Ruha Benjamin. The podcast episode is titled RIP… D.E.I., which is a bit weird. They do start with the acronym and the programs, but then it’s actually about diversity, equity, and inclusion: not the successful initiatives, not the unsuccessful ones, not the posturing, not the Truskian craziness, but rather those three things as they are lived in society.

And whether or not you are interested in those three things, have a listen; their exchange goes much deeper than that label or those words, and touches on a lot of topics, like racism, education, imagination, systemic issues, social hierarchies, communities, nation states, oppression, language, and more. It’s critical thinking; it’s about who’s forgotten, who’s ignored, who doesn’t have a voice. Again, it’s so broad that I still don’t think their title was correct.

On the technological side, Benjamin is deeply versed in the field, especially “the relationship between innovation and inequity” and they have an excellent exchange about AI, its pitfalls and risks, but also hopes and possibilities. (That part starts at 44:17).

There are a couple of spots where I think Benjamin might have disagreed with Noah and they could have gone deeper to resolve that, but it’s still a great conversation which, as my list above shows, ranged across a lot of the topics I cover here. Final note (and final highlighted quote below): they don’t specifically talk about foresight or futures (maybe one phrase), but the conclusion is very much “imagine and build the futures you want.”

Quotes below pulled from the Reader transcript and lightly edited.

[Ed. On other kinds of AI] Ancestral intelligence is one thing that I think of, as an important type of AI that has to do with collective wisdom, know-how, the insights, and experiences of people who have to learn how to navigate the underside of society, who are constantly buried under the rubble of so-called progress. So there are kinds of knowledge that grow in that rubble that are often discounted as backwards and no longer needed. … The other type of AI is abundant imagination. Going back to thinking that often the artificial type of AI is displacing our ability to actually use our imaginations and creativity rather than just plugging in prompts and getting the outputs. […]

A group of researchers said, okay, we understand this phenomenon. It’s getting reproduced in these systems. So what they did was train not on doctors’ official medical reports; instead, they trained the system to predict what a patient would say about their own experience of pain. So the intelligence was based on the patient's own self-report. […]

I feel tangibly that it’s possible to do things differently, even if the dominant culture of whatever industry or institution we’re in is moving in one direction. We have the power, especially when we band together and we work together to actually create a different way of doing things, perhaps like a seed that can grow and become a model for something that we want to develop over time.

The man who discovered media codes and how to resist them

Second essay in a series by Annalee Newitz, “based on an introductory media studies course taught in the spring of 2024 at the University of San Francisco.”

The article explores the pivotal contributions of Stuart Hall to media studies, particularly his focus on audience power in interpreting media. Hall introduced the concepts of “encoding” and “decoding,” proposing that the meaning of media messages is not fixed but shaped by the audience’s context and understanding. He identified three ways audiences decode messages: dominant (here Newitz also tackles Gramsci’s concept of “hegemony”), negotiated, and oppositional readings, showing that audiences could actively reinterpret media, gaining control over meaning, and challenging dominant discourses that often reinforce power relations.

As I mentioned, it’s part of a “series of letters about how to analyze the media during a communications crisis.” I’m sure you can understand why, and I encourage you to follow the series.

Discourse is a collection of ideas in the public sphere that tells us who we are, what to believe, and how to act. (This is a simplification, but it works for our purposes here.) There are all kinds of discourses, large and small. MAGA is a discourse, for instance, as is skincare TikTok. Most of us are immersed in many different, occasionally contradictory discourses. […]

An oppositional decoding is when a receiver perceives the truth behind the discourse — the truth of power relations. It’s the moment when you’re watching a politician on TV say “freedom,” and you realize that the politician only means freedom for white, cisgender people. Or when you’re watching a supposedly wholesome teen comedy and realize that it’s incredibly racist. […]

People have to have a language to speak about where they are and what other possible futures are available to them. These futures may not be real; if you try to concretize them immediately, you may find there is nothing there. But what is there, what is real, is the possibility of being someone else, of being in some other social space from the one in which you have already been placed.

The end of search, the beginning of research

Ethan Mollick gives a useful overview of the convergence of autonomous AI agents and powerful Reasoners, and how together they lead to advanced systems like OpenAI’s Deep Research, which can perform complex research much as human experts might, but at remarkable speed.

Reasoners (OpenAI o1, DeepSeek-R1) improve AI problem-solving by automating the reasoning process, allowing for more effective and nuanced outputs. Despite current limitations in general-purpose agents like OpenAI’s Operator, the development of specialised agents like Deep Research illustrates the potential for AI to perform complex research tasks quickly and efficiently. As AI technology continues to evolve, we may soon see these systems transition from narrow applications to more autonomous digital workers capable of a wider range of tasks.

Most of these products are behind pro accounts, some quite expensive. To give it a try, Perplexity.ai has integrated Deep Research in its search product, and non-subscribers “have access to a limited number of answers per day.” For those with more of a technical bent, there are already a number of open source copies of this idea, most using the o1 or o1-mini API and combining it with web scraping APIs, but I don’t have one to recommend yet.
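To make that pattern a bit more concrete, here is a minimal sketch of what such a clone might look like, assuming the OpenAI Python client, the o1-mini model, and a hypothetical search_web() helper you would back with whatever search or scraping API you prefer; it’s an illustration of the loop, not any particular project’s implementation.

```python
# Minimal sketch of the "reasoning model + web scraping" pattern described above.
# Assumptions (not from the article): the OpenAI Python client, the o1-mini model,
# and a placeholder search_web() you would wire to your own search/scraping backend.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment


def search_web(query: str) -> list[str]:
    """Placeholder (hypothetical): return text snippets for a query via your own API."""
    raise NotImplementedError("plug in your own search or scraping backend")


def deep_research(question: str, rounds: int = 3) -> str:
    notes: list[str] = []
    for _ in range(rounds):
        notes_text = "\n".join(notes) or "(none)"
        # Ask the reasoning model which search would fill the biggest gap so far.
        plan = client.chat.completions.create(
            model="o1-mini",  # or o1, or another reasoning model behind a compatible API
            messages=[{
                "role": "user",
                "content": f"Research question: {question}\nNotes so far:\n{notes_text}\n"
                           "Suggest one concrete web search query that would fill the "
                           "biggest gap. Reply with the query only.",
            }],
        )
        query = plan.choices[0].message.content.strip()
        notes.extend(search_web(query))

    # Final pass: synthesize the gathered material into a short report.
    summary = client.chat.completions.create(
        model="o1-mini",
        messages=[{
            "role": "user",
            "content": f"Write a short, sourced report answering: {question}\nMaterial:\n"
                       + "\n".join(notes),
        }],
    )
    return summary.choices[0].message.content
```

The interesting design choice is that the “planning” and the “writing” are both just calls to the reasoning model; everything that looks agent-like lives in the plain loop around them.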

Reasoning models show you can make AIs better by just letting them produce more and more thinking tokens, using computing power at the time of answering your question (called inference-time compute) rather than when the model was trained. […]

But narrow isn’t limiting - these systems are already capable of performing work that once required teams of highly-paid experts or specialized consultancies. […]

These experts and consultancies aren’t going away - if anything, their judgment becomes more crucial as they evolve from doing the work to orchestrating and validating the work of AI systems.

More → Another piece on Deep Research, this one by Ben Thompson. As he usually does, he spends quite a bit of time quoting his past essays, and then veers into “knowledge value.” Stuff that isn’t easily found online is invisible to AIs, and Thompson wonders about the added value of such content. It’s a valid question; it just feels to me like he jumped to it before explaining more of the rest, which is why I featured the other piece. Anyway, he does overlap with / reinforce something Mollick mentions in passing: “Google surfaces far more citations, but they are often a mix of websites of varying quality (the lack of access to paywalled information and books hurts all of these agents).”

It reminded me of another of Thompson’s older pieces, on zero trust information, where he argued that validating the authenticity of the creator of something online would be easier than filtering out the fake content. Taken together, they hint at an internet where the most valuable information is beyond AI’s reach and creators are behind “proof walls” (or attached to some kind of blockchain) that validate authorship. The latter likely being the owners of the former.


§ Thomson Reuters wins first major AI copyright case in the US. Big! “When determining whether fair use applies, courts use a four-factor test, looking at the reason behind the work, the nature of the work (whether it’s poetry, nonfiction, private letters, et cetera), the amount of copyrighted work used, and how the use impacts the market value of the original. Thomson Reuters prevailed on two of the four factors, but Bibas described the fourth as the most important, and ruled that Ross ‘meant to compete with Westlaw by developing a market substitute.’”


Futures, Fictions & Fabulations

  • CIFS Toolkit for Applied Strategic Foresight. “This collection of tools and approaches has been carefully curated and refined based on our own extensive experience in the field of foresight”
  • Future of Water - Awareness Game. “The game consists of 112 insight cards on emerging issues, signals, current events, emerging tech, emerging science and more, providing users with a holistic overview on the current changes happening in the world of water. The game also consists of process cards, which allows participants to go through a 5-step process ending with the imagination of probable scenarios and the creation of artefacts of the future.”
  • Journalism, media, and technology trends and predictions 2025. “The main findings from our industry survey, drawn from a strategic sample of 326 digital leaders from 51 countries and territories.”

Algorithms, Automations & Augmentations

Built, Biosphere & Breakthroughs

Asides

Your Futures Thinking Observatory