Note — Sep 04, 2022

AI and the Limits of Language

If you read one thing about AI this week, go back to the first feature above, but this piece by Jacob Browning and Yann LeCun is also quite good if you're tracking the thinking around semantics, “intelligence,” embodiment, and the potential of AI. They write that “it isn’t clear what semantic gatekeeping is buying anyone these days,” linking to ‘The mental model is the message,’ which I mentioned above, and the link comes across as dismissive. It’s funny, because in my opinion the two pieces are mostly on the same side: both are trying to find a way to talk about AI without getting stuck in erroneous parallels and fabulations. They aren’t saying the exact same thing, but they are pointing in the same direction.

[L]anguage doesn’t exhaust knowledge; on the contrary, it is only a highly specific, and deeply limited, kind of knowledge representation. […]

All representational schemas involve a compression of information about something, but what gets left in and left out in the compression varies. […]

It is thus a bit akin to a mirror: it gives the illusion of depth and can reflect almost anything, but it is only a centimeter thick. If we try to explore its depths, we bump our heads. […]

[T]he deep nonlinguistic understanding is the ground that makes language useful; it’s because we possess a deep understanding of the world that we can quickly understand what other people are talking about.