Note — Sep 18, 2022

Rethinking Intelligence in a More-than-Human World

I almost skipped over this one by Amanda Rees at Noema Magazine because, although it touches a cluster of topics I’m paying attention to (human intelligence, synthetic intelligence, biological non-human intelligence), I’ve also featured a number of pieces here that are quite adjacent. That would have been a mistake. Rees doesn’t only look at proofs of intelligence in animals, although there is some of that, and she doesn’t only look at how to consider Artificial Intelligence, although there is some of that too. Instead, she focuses on how our view of human agency and intelligence came to be, what this view misses, and how it introduces errors and biases into how we then consider animal and synthetic intelligence. Rees goes over the European Enlightenment, “eugenicist Francis Galton,” meritocracy, and how “elite European scholars and gentlemen” used “their own experience as a basis for their studies.”

Beyond the established view of intelligence, what of emotion, play, learning, and stories? Even beyond these angles on how our brains work, Rees also proposes considering collaboration, alliances, plants, multispecies agency, and companion species. Good read, and I’d suggest also going through the archives for Kate Darling on robots as animals, and for the old but sadly little-used idea of BASAAP (Be As Smart As A Puppy) as a way to frame expectations of AI.

If we are in fact to be “wise,” we need to learn to manage a range of different and potentially existential risks relating to (and often created by) our technological interventions in the bio-social ecologies we inhabit. We need, in short, to rethink what it means to be intelligent. […]

But again, the idea of what constitutes “intelligence” closely resembles the earlier 19th-century model of rational, logical analysis. Key research goals, for example, focus on reasoning, problem-solving, pattern recognition and the capacity to map the relationship between concepts, objects and strategies. “Intelligence” here is cognitive, rational and goal-directed. It is not, for example, kinesthetic (based on embodiment and physical memory) or playful. Nor — despite the best efforts of Rosalind Picard and some others — does it usually include emotion or affect. […]

All these debates about intelligence and the human future are based on the assumption that intelligence is fundamentally rational and goal-directed — that is, that the 19th-century understanding of the concept is still the most appropriate interpretation of what it is. What if it isn’t? And what about agency? What if agency isn’t self-conscious, or even based in an individual? […]

Fairy tales may well prove more useful than factor analysis in understanding human agency in the Anthropocene. This is because stories are vitally important in both explaining and expanding an individual’s understanding of a situation. Particularly in the past decade, the West has seen how stories (myths, post-truths, history) help form collective community identities, which can sometimes exacerbate inter-community tension.