Dispatch — Sep 16, 2021

Intelligences

A few weeks ago, I had a couple of discussions where people asked me to name some of my current core interests. I said “not AI but AI.” By which I meant Augmented Intelligence over Artificial Intelligence, which is something I’ve written about a few times in the weekly but never expanded on. For this Dispatch, I thought I’d tie together a few things on that topic, and intelligence more generally.

By the way, I’m using the new website as a library / digital garden, so when it’s something I’ve covered previously, I link to my note instead of the article directly; that way you have my commentary and chosen quotes, and can branch out from there. If you’d prefer direct links, tell me!

Augmentation

First off, I don’t believe we’re anywhere close to Artificial General Intelligence (the kind that people are scared of). Second, even without the G, what’s more credibly useful, and more interesting to track, is when this new ‘intelligence’ augments what humans can do.

But with rigorous attention to programs’ capabilities, and more research into the effects of the quality of the data we use as inputs and the transparency of their workings, we may find that AI can play a vital role in supporting all manner of experts by identifying patterns and sources that can escape human eyes alone.
Demis Hassabis

A lot of AI already can—and, I’d argue, should—be considered as augmentation. Sentencing algorithms and other decision packages, for example, are hugely biased and often used as decision-makers rather than recommendations, as a way to unload responsibility and say “the software said so.” Beyond fixing all the bias, ethical, and privacy issues, they need to be reframed as helpers, not as some new superior instance that makes the decisions.

Algorithms that analyse tumours, read radiology scans, or provide any kind of diagnostics can be very useful when they are used to power through massive amounts of data to find patterns or exceptions, but at this stage doctors still need to make the decisions. See for example this horror story where health providers end up pushing doctors into decisions by over-relying on an algorithm that erroneously sees signals in patients’ behaviour.

As is often the case, looking at art (visual arts and music for now) is a good place to start for indicators of where things might be going. First example: some artists already view AI as a new medium, one they are in interaction with.

But if you consider the whole process, then what you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists — one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art.

Marko Ahtisaari seems to agree and also uses the term “non-human intelligence.”

Art and AI is a much-hyped, poorly understood and little experienced area. The breakthroughs will come, I believe, from the centaurs, the artist(s) working together with non-human intelligences, not machines emulating styles or replacing human artists.

A couple of years ago I shared a piece by Clive Thompson at Mother Jones, What Will Happen When Machines Write Songs Just as Well as Your Favorite Musician? The gist of the piece is that AIs can reproduce the technical aspects but don’t have the cultural context, and can’t create something new. I wrote that “we often say that creative and collaborative jobs will be hardest to replace because of some unique human quality. Perhaps. That assumes AIs need to match the best (or very good) humans but they don’t really need to, do they? Most people are quite satisfied with good enough and perhaps good enough doesn’t need that much ‘uniquely human’ creativity.” Will there be a long-tail type of distribution where much of the music is made by algorithms and the collaborative niches are filled by centaurs? Will there be a ‘vinyl-revival-like’ connoisseur market for fully human music?

In written form, a lot can already be automated, like weather forecasts or sports box scores, but so far, since algorithms don’t understand what they are doing and are simply mimicking, there’s always nonsense or, at the very least, a recognizable uncanniness. Will that last? Who knows, but circling back to augmentation, there are a great number of platforms to enhance your writing, offer prompts, accelerate writing, propose variations, etc. But they are often not markedly more interesting than the latest evolution of auto-complete and grammar check. More intriguing are ideas like Matt Webb’s GPT-3 is an idea machine:

Here’s what I didn’t expect: GPT-3 is capable of original, creative ideas.

Using GPT-3 doesn’t feel like smart autocomplete. It feels like having a creative sparring partner.

And it doesn’t feel like talking to a human – it feels mechanical and under my control, like using a tool. […]

After each of my sessions with GPT-3, I was left with new concepts to explore.

And:

It occurred to me that GPT-3 has been fed all the text on the internet. And, because of this, maybe it can make connections and deductions that would escape us lesser-read mortals. What esoteric knowledge might be hidden in plain sight? I can ask.

Now that’s something I’d be a lot more interested in than a better spell checker or writing prompts.
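
To make the ‘idea machine’ use concrete, here’s a minimal sketch of what such a sparring session could look like in code. It assumes the pre-v1 `openai` Python client that was current in 2021; the engine name, prompt framing, and sampling parameters are my own illustrative guesses, not Webb’s actual setup.

```python
# A sketch of a GPT-3 "idea machine" session, in the spirit of Matt
# Webb's experiments. Assumes the pre-v1 `openai` Python client; the
# engine name, prompt, and parameters are illustrative guesses.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def brainstorm(topic, n_ideas=5):
    """Ask the model for unconventional ideas about a topic."""
    prompt = f"Brainstorm {n_ideas} unconventional ideas about {topic}.\n\n1."
    response = openai.Completion.create(
        engine="davinci",   # a GPT-3 base model available at the time
        prompt=prompt,
        max_tokens=200,
        temperature=0.9,    # higher temperature -> more divergent ideas
    )
    # The completion continues the numbered list we started.
    return "1." + response.choices[0].text


print(brainstorm("notes as conversations across time"))
```

The point isn’t the code, it’s the loop: ask, read, react, ask again.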

Just this week I loved Gordon Brander’s Notes are conversations across time for its explanation of the value of conversation and feedback loops but also because he says this:

A conversation can happen between yourself and yourself, across time, through the notes your past self took for your future self. An autopoietic system where information time travels between your future and past self in a meaningful cybernetic loop.

And then he talks about programmatic loops:

We can construct conversational feedback loops that help us learn a language, or give us programmable memory. We can construct conversational feedback loops that program creativity, or garden ideas from the bottom-up, or evolve ideas spontaneously.

As soon as you start thinking about automated loops and helping the formation of habits, and then add conversations, you also have to start wondering whether an algorithm might provide just enough ‘insight,’ and fetch just enough data from elsewhere (“here are some articles similar to what you just said”), to be a useful partner to bounce ideas off of. In other words, not just provoking loops in your thinking through automated means, but advancing a conversation.
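
As a concrete illustration of the “here are some articles similar to what you just said” part of that loop, here’s a minimal sketch using plain TF-IDF cosine similarity with scikit-learn; the notes are placeholders, and a real system might use richer embeddings instead.

```python
# Surface past notes related to the text you're writing right now.
# Plain TF-IDF cosine similarity via scikit-learn; the notes below
# are stand-ins for a real note collection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "AI as augmentation rather than replacement of human experts",
    "Mycorrhizal networks move carbon between trees in a forest",
    "GPT-3 works better as a sparring partner than as an oracle",
    "Centaur chess pairs human judgment with machine calculation",
]

vectorizer = TfidfVectorizer()
note_vectors = vectorizer.fit_transform(notes)


def similar_notes(draft, top_k=2):
    """Return the past notes most similar to what was just written."""
    draft_vector = vectorizer.transform([draft])
    scores = cosine_similarity(draft_vector, note_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [notes[i] for i in ranked]


print(similar_notes("thinking of the machine as a creative partner"))
```

Run on every save, a loop like this turns your past notes into the other side of the conversation.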

Going back to music, where I was wondering what ‘just enough’ is for AI music to be a useful simulacrum, it’s the same here: you don’t need a perfectly smart Data from Star Trek, just a tool to get you to structure your thoughts. Even when followed by an imperfect rejoinder, it can still be very useful, and perhaps bring divergent ideas that get you somewhere unexpected.

I deliberately kept coding for last because I’ve seen the headlines but haven’t spent any time exploring the tools, or even other people’s opinions. Let’s just mention that much of what I’m saying about ‘creative writing’ is also being worked on for coding. I think the two will be interesting to contrast, since coding is more structured but also more brittle: a broken phrase might grate on the eye, but a broken line of code doesn’t work at all. Repeating yourself in prose is either a mistake or a way of making a point; repeating yourself in code means you need a new function. Not to dismiss the creativity in code, but code and maths writing code seems like a more natural fit than code and maths writing poetry. (And I’m saying this with a t-shirt in a drawer at home that says “code is poetry.”)

Keep in mind that language is often co-opted to obfuscate things. As Kelly Pendergrast warned us, framing AI as a robot teammate can also be used to make us “accept or ignore the hidden labor of thousands of poorly paid and precarious global workers.”

Still, even when AIs get better, seeing them as assistants instead of replacements not only fits their level of development better (and will for a good long while), but sets more reasonable expectations of what they do or could potentially do in the short to medium term, and frames them as something helpful and less threatening. Which leaves, in my opinion, more room to discuss the real issues around bias and ethics.

Time

If AI models cannot be reduced to human terms of reference, perhaps human thought can be expanded to comprehend computational terms of reference. Living in superhistory involves learning to do that.

Last example of augmentation, which I’m separating from the rest because I think it’s an important one but also because it’s as much a reframing of Artificial Intelligence as a way of augmenting some aspect of our own. Venkatesh Rao proposes that we should talk about superhistory, not superintelligence. AIs right now are not really intelligent, they are however a compression of massive amounts of data, usually over years, we can see various models as compressing years of experience and making it accessible.

In his best example, he argues that since chess World Champion Magnus Carlsen learned by playing against AI, he was training with different intelligences, ones trained on decades of games.

In his best example, he argues that since chess world champion Magnus Carlsen learned by playing against AIs, he was training with different intelligences, ones trained on decades of games.

In other words, viewing AIs not as pale copies of our intelligence, but as compressed knowledge. Or perhaps compressed information edging closer to compressed knowledge. (In a sequence like Data -> Information -> Knowledge -> Wisdom. Or maybe Clarke’s scale, which adds -> Foresight.)

Companions

The first person to nudge me towards seeing AIs as another form of intelligence instead of a copy was Matt Jones, over ten years ago, with the BERG-germinated idea of a goal / profile / use of AI that aims to B.A.S.A.A.P., Be As Smart As A Puppy (that’s his original post; I also recommend this talk, and there are a few more references grouped here). There is much that a machine can do for us without needing to be as intelligent as we are. It just needs some intelligence, familiarity with our spaces and needs, and an ‘intent’ to help with a specific set of tasks.

BASAAP is my way of thinking about avoiding the ‘uncanny valley‘ in such things. Making smart things that don’t try to be too smart and fail, and indeed, by design, make endearing failures in their attempts to learn and improve. Like puppies. […]

Each of them working across a little domain within your home. Each building up tiny caches of emotional intelligence about you, cross-referencing them with machine learning across big data from the internet. They would make small choices autonomously around you, for you, with you – and do it well. Surprisingly well. Endearingly well. […]

That might be part of the near-future: being surrounded by things that are helping us, that we struggle to build a model of how they are doing it in our minds. That we can’t directly map to our own behaviour. […]

Non-human actors in our home, that we’ve selected personally and culturally. Designed and constructed but not finished. Learning and bonding. … New nature.

Last year Alexis Lloyd wrote one of those articles I keep referring back to, R2D2 as a model for AI collaboration. Based on a talk she gave at the Eyeo conference in 2016, she proposes C3PO, Iron Man, and R2D2 as three frameworks for how to design for AI. Lloyd’s hypothesis is that the anthropomorphic model for robots is a skeuomorph because “we haven’t developed new constructs for machine intelligence yet.” The ‘robots taking our jobs’ trope feeds directly into (or from) that gap; with no better model, we take the only intelligence we care about, and apply our fears and defects to its copy. C3PO is a stereotypical sci-fi robot, the Iron Man suit is the ultimate augmentation, but perhaps the humble R2 is the most useful form: clearly intelligent, yet different and with its own language.

As we design interactions with these kinds of machine intelligences, what are their versions of R2D2’s language? What expressions feel native to their processes? What unique insights can we gain from the computational gaze? […]

Let’s not let the future of AI be weird customer service bots and creepy uncanny-valley humanoids. Those are the things people make because they don’t have the new mental models in place yet. They are the skeuomorphs for AI; they are the radio scripts we’re reading into television cameras.

Just this year, Kate Darling came out with a book on a related topic (this article-length teaser is very good). Starting from how animals have been used to augment ourselves, she goes on to explain how our obsession with creating AIs based on our own brains, and then wanting / fearing our replacement, is misguided; we should think of that kind of intelligence as other, with its own strengths and benefits. AI to help and augment.

Despite the AI pioneers’ original goal of recreating human intelligence, our current robots are fundamentally different. They’re not less-developed versions of us that will eventually catch up as we increase their computing power; like animals, they have a different type of intelligence entirely. […]

[T]he main thing I want to argue is that, contrary to our tech-deterministic beliefs, we actually have some control over how robots impact the labour market. Rather than pushing for broad task automation, we could invest in redesigning the ways people work in order to fully capture the strengths of both people and robots. […]

[W]hen we broaden our thinking to consider what skills might complement our abilities instead of replacing them, we can better envision what’s possible with this new breed.

Other intelligences

This angle might not resonate as much with people solely interested in the technology behind AIs, but in a more holistic view, I find it fascinating to parallel these potential programmed intelligences with the natural intelligences we still barely understand, whether they be animals (other primates, dogs, dolphins, crows, pigs, elephants, squids), plants, or fungi. In the same kind of hubris through which humans have named a pseudo-epoch for themselves, the anthropocene, ‘we’ also see ourselves as the pinnacle of intelligence. It’s not only a more realistic but also a more humble and informative perspective to consider ourselves as one of many forms of intelligence, and to take a posture where we can learn from these other forms.

[T]he idea of a “functional biomorphic computing device.” Unconventional Computing is the “unorthodox hybrid of computer science, physics, mathematics, chemistry, electronic engineering, biology, material science and nanotechnology.” They study the intelligence and computing shown by slime molds as well as mycelium and fungi to discover mechanisms of information processing in physical and chemical living systems which could be leveraged for our own purposes. [Source: Beyond Smart Rocks]

For more in this direction, have a look at my notes tagged with ‘fungi’ or ‘trees’. One example, Suzanne Simard on the intelligent forest:

Mycorrhizal fungi are generalists — they colonize plant root tissue, sometimes even intracellularly. They might invest in many tree species to hedge their bets for survival, and the off chance that some carbon would move to a stranger was simply part of the cost of moving it to relatives. […]

Rather than biological automata, they might be understood as creatures with capacities that in animals are readily regarded as learning, memory, decision-making, and even agency.

The unexplored

Perhaps the variety of intelligences is like the oceans: something right in front of us that we often disregard and even forget about, but which is actually largely undiscovered and unobserved, yet fascinating when you do take the time to look.

Not to take away from Mars and deep space, but it could be that we have more to gain from closer targets. We are ignoring other places on this planet, as well as other creatures with their own intelligences, while we focus on trying to reinvent our own. There are simpler, likely more useful forms of intelligence to be explored and collaborated with.