Feral Minds ⊗ General Purpose AI and Harm ⊗ Understanding 5 Key Phrases

No.296 — The rise of techno-authoritarianism ⊗ A new global gender divide is emerging ⊗ The question is, what is The Question? ⊗ Matriarchal Design Futures

Japan, Fukushima City, Forest for Birds (Kotori no Mori).

Feral minds

This piece explores the relationship between language and consciousness in humans, starting with the case of Victor, a boy who grew up without language, and how that absence affected his cognitive abilities and understanding of the world. The author then considers whether we can glean some insights into how things might develop with LLMs.

He had me thinking that if human languages are a form of code, and the larger LLMs have a nontrivial slice of all human knowledge ‘encoded’ into their models, could this code, almost as a second-order effect, produce consciousness? I’m assuming not, but that much knowledge in one ‘receptacle’ is not something we’ve seen before. Could something emerge that we haven’t thought of?

Put another way, before LLMs, a lot of AI was programmatic, basically coding all the potential interactions, questions, replies, and actions. That never panned out because it’s too large an endeavour with too many branches on the tree. But what if all (or a big enough slice) of the options developers previously had to think of, all the potential replies and variations of actions, are now (though not explicitly) in an LLM’s corpus? Could it somehow self-organise into thinking? Again, I’m guessing no, but what intrigues me, more than wondering what the next trick a coder will think of is, is wondering about what might emerge.

And even if the answer to both of the above is a resounding NO, and there are hundreds of things I haven’t thought of (very likely), what about 10 versions down the road?

Among developmental scientists, theory of mind, like mental synthesis, is viewed as a key function of consciousness. In some ways, it can be understood as a kind of cognitive prerequisite for empathy, self-consciousness, moral judgment and religious belief — all behaviors that involve not only the existence of a self, but the projection of it out into the world. […]

As these models evolve, it increasingly appears like they are arriving at consciousness in reverse — beginning with its exterior signs, in languages and problem-solving, and moving inward to the kind of hidden thinking and feeling that is at the root of human conscious minds. […]

In the early 20th century, a group of American anthropologists led by Edward Sapir and Benjamin Whorf posited that cultural differences in vocabulary and grammar fundamentally dictate the bounds of our thought about the world. Language may not only be the thing that endows AI with consciousness — it may also be the thing that imprisons it. What happens when an intelligence becomes too great for the language it has been forced to use? […]

Like Samantha, the autonomous LLMs of the future will very likely guide their development with reference to unfathomable quantities of interactions and data from the real world. How accurately can our languages of finite nouns, verbs, descriptions and relations even hope to satisfy the potential of an aggregate mind?

Is it possible for general purpose AI to do no harm?

Another one on AI, this one by Rachel Coldicutt, who looks into developers doing things just because they can, what “AI ethics” even means, the meaning of “general purpose,” the risks of easily adaptable AIs, and the importance of community involvement. Well worth a read beyond my too-short summary, but I wanted to draw your attention to “general purpose” vs “general intelligence.”

For the former, Coldicutt uses the European AI Act’s definition, “AI systems that have a wide range of possible uses, both intended and unintended by the developers. They can be applied to many different tasks in various fields, often without substantial modification and fine-tuning.” AGI, or Artificial General Intelligence, is habitually used to describe a “hypothetical type of intelligent agent which, if realized, could learn to accomplish any intellectual task that human beings or animals can perform.”

I think it’s an important distinction, and one I haven’t seen expressed that often. People talk about ChatGPT and then in the next sentence about Skynet. General purpose AI, as she explains very well in her piece, is a stage of SALAMI development that should be taken more seriously, and is useful as a description of something more advanced than ChatGPT but not a super-intelligence either.

Irresponsible technology development has become the norm - and that doing things because you can, because they are interesting and possible, has become valorised as “innovation”. As I’ve said hundreds of times before, solving this is a social problem not just a technical or a legal challenge. […]

The real issue here is the development of a culture in which it has become normal for some technologists and technology companies to ignore the basic tenets of being a moral and responsible human being. […]

While brokerage and international accords are important, the fundamental problem is that we also need the tech industry to grow up and develop a sense of moral responsibility. […]

It is not the existential risks that AI may or may not create that concerns me; it’s how the tools we already have could be weaponised without restraint.

The future of the planet hinges on understanding these 5 key phrases

We often talk about fighting climate change; one of its battles is waged with words, with one side trying to protect vested interests and even open up business opportunities, and the other trying to curtail the fossil fuel industry and pressure (or force) governments into action. Here Rebecca Leber of Vox explains how understanding some key phrases is essential to addressing the climate crisis and to understanding what’s going on. What “unabated” means exactly in this context, and the difference between Carbon Capture and Storage (CCS) and direct air capture, are especially useful.

To count as abated, a fossil fuel-reliant plant would need to use technology that captures carbon emissions before they escape into the atmosphere. This is called carbon capture and storage (CCS). […]

Carbon capture and storage helps industries avoid pumping as much carbon dioxide into the atmosphere as they would otherwise, while direct air capture removes the greenhouse gas from the air. It’s a subtle but pretty important difference. […]

Global leaders have reaffirmed the principle that rich countries should help poorer nations repeatedly in UN texts since then, but many key questions remain at a stalemate: Who should be paying into funds to help vulnerable nations? What counts as a particularly vulnerable country? And are affluent countries obligated to pay or should they do it of their own volition?

§ The rise of techno-authoritarianism. I didn’t feature this one because we’re probably all a bit tired of reading about Silicon Valley’s “ascendant political ideology,” but it’s still a good one. “Our children are not data sets waiting to be quantified, tracked, and sold. Our intellectual output is not a mere training manual for the AI that will be used to mimic and plagiarize us. Our lives are meant not to be optimized through a screen, but to be lived—in all of our messy, tree-climbing, night-swimming, adventuresome glory.”

§ A new global gender divide is emerging. Tired: city vs rural divide. Wired: gender divide. “The clear progressive-vs-conservative divide on sexual harassment appears to have caused — or at least is part of — a broader realignment of young men and women into conservative and liberal camps respectively on other issues.”

§ The question is, what is The Question? Jon Evans wonders if AI has plateaued, making it… not the most important question in the world. “But if it’s no, if instead AI is in the midst of something more like a stepwise series of S-curves, as I've long strongly suspected, then … well, then we’re in one of those ambiguous confusing eras when the most important question in the world is not at all obvious, as per most of human history. (Other strong contenders: ‘Will China and the US wage war?’ ‘Will climate change be stopped in time to prevent climate catastrophes?’ ‘When will sub-Saharan Africa get rich?’ ‘Will the next pandemic be natural or artificial?’ ‘Will aging turn out to be a solvable bug / series of bugs?’ ‘Will anyone go nuclear?’)”

§ Matriarchal Design Futures. “A non-capitalistic, non-hierarchical pedagogical framework centering the practices and values of caregiving and nurturing, which holds for all identities: for caregivers, mothers, those who are not mothers, women, men, and nonbinary alike.”


  • 🤬 📹 🇪🇺 EU set to allow draconian use of facial recognition tech, say lawmakers. “The German member of the European Parliament said the final text of the bloc’s new rules on artificial intelligence, obtained by POLITICO, was ‘an attack on civil rights’ and could enable ‘irresponsible and disproportionate use of biometric identification technology, as we otherwise only know from authoritarian states such as China.’”
  • 🤯 🏈 🎶 🇺🇸 Taylor Swift Conspiracy Theorists Get Psyops All Wrong. “Some prominent right-wing commentators say the relationship between Taylor Swift and Kansas City Chiefs tight end Travis Kelce is a ploy to keep President Biden in power. Psyop experts think otherwise.”
  • 🔊 🌳 🌳 😌 tree.fm. “Tune Into Forests From Around The World. Escape, Relax & Preserve. People around the world recorded the sounds of their forests, so you can escape into nature, and unwind wherever you are. Take a breath and soak in the forest sounds as they breathe with life and beauty!”
  • 😂 💕 🎥 Art collective MSCHF is streaming movies like Barbie in ASCII for free. “The art collective MSCHF is stirring up some trouble on the internet again. For its latest project, ASCII Theater, the group will broadcast a popular new film daily in ASCII format that anyone can watch for free. Just paste the command on your Mac or PC’s terminal, and you can watch films like Barbie exactly as, well, virtually no one has intended.”
  • 🎥 🇲🇬 🗻 What’s Inside This Crater in Madagascar? “This video is great for so many reasons. It’s a story about geology, cartography, globalization, the supply chain, infrastructure, and the surveillance state told through the framework of falling down (waaaay down) an online rabbit hole. It reinforces the value of academics and the editing is top shelf.”
  • 😍 📸 🍄 🐦 🌳 The 2023 Wildlife Photographer of the Year Reveals the Most Magnificent Animal Behavior.

Your Futures Thinking Observatory