Preparing this issue I read a few pieces on ChatGPT, specifically by Ben Thompson, Arvind Narayanan and Sayash Kapoor, and Ian Bogost, in order of preference. None of them were really what I wanted to feature on this topic. I tried ChatGPT quickly and was impressed, then not impressed. One reply will initially blow your mind, then the next one will be so completely wrong that it utterly destroys any trust you might have had. It’s the Wizard of Oz doing something wondrous before proceeding to stumble down the stairs. It’s fun, and I’m sure some fun usages will pop up, and it will progress, but the thing doesn’t understand anything. Thompson has a good comparison with a calculator to explain deterministic vs. probabilistic output. OpenAI spews out a mix of text that is basically statistical. There is no understanding there at all.
That’s problematic not only because the output swings from superb to nonsense, but also because developers of AIs, and even the people writing about them, keep using words related to intelligence. That’s not just a semantic debate. How can we be expected to properly evaluate, test, and sometimes use these tools when the whole vocabulary currently used to describe them is erroneous? Messaging, capabilities, expectations, and discourse are all out of whack.
The second thing that annoys me in this whole story is the expectations it stems from. Just below, I point to an interview with Linus Lee about the tools he builds with generative AI products. He’s not trying to get an intelligent assistant, he’s developing new tools to work and play with the material of text. That seems like a much better approach, and much more aligned with the level of these tools. This obsession with creating ‘intelligence’ blinds us to simpler uses and leads down some wrong paths. One of the best ‘AI’ products I’ve most consistently used over the last couple of months is the autocompletion of comments in Google Docs. I’ve been editing and reviewing a few articles and the suggestions when commenting are excellent. There’s no hype, there’s no flashy AI bragging, just a commenting feature that helps me along.
Another great tool (although not as great this week) is the GPT-based Ghostreader I mentioned last week and use for the 🤖 Summaries. The reason it works for me is that I don’t use it to avoid reading an article, I use it to summarise something I’ve just read. I have the knowledge to validate what’s generated, so when it kind of sucks, I just rework it. Image generators are the same way: it’s easy to see what the image represents and just gloss over or try to tweak the mistakes. Expectations are adjusted by our previous knowledge, contrary to asking ChatGPT for something we don’t know, with no way to immediately figure out what’s bullshit. If the developers were not hoping to fake true intelligence, they might have integrated some visual indication of how certain an answer is. But you get fewer headlines and less investment that way.
A lot of AI is hidden, which is good. A lot of AI is hidden, and that’s problematic; a lot of AI makes wild claims, and that’s an issue. Two of the phrases I’ve used most in years of freelancing are ‘manage expectations’ and ‘under-promise, over-deliver.’ The whole AI ‘industry’ could stop sci-fi-ing their ideas, start setting better expectations for themselves, and start putting forward more level-headed expectations in public. They’d stop going down in flames when overblown promises unavoidably aren’t met, and everyone could have smarter conversations about their products.