Again, much of the AI talk in here we’ve already seen, but it’s worth a read for the useful parallel with geofoam and for a few other details. First, it’s another example that AIs just make shit up: they don’t differentiate between inventing a turn of phrase and inventing facts, so they can spew out text that looks like a good approximation of an article while the content is false. Kind of an important detail.
Second, much of the improvement in recent years comes from scaling up the data that goes into these models, so what happens when ‘everything’ has been used for a model and it’s still not good enough? Google has a 1.6-trillion-parameter model; at some point you have to think it’s getting redundant.
Third, even when people write critically about AIs, they still use words and phrasings like “thinking,” “didn’t understand,” and “dialog.” It’s hard to debunk the premise of AIs having actual intelligence when you keep assigning them human traits. Also, noting “low attention text” for later.
A reliance on scale, though, is inextricably linked to the statistical approach that creates uncertainty in these models’ output. These systems have no centralized store of accepted “truths”; no embodied understanding of “what the world is like for humans” and, hence, no way to distinguish fact from fiction or to exercise common sense. […]
This, I think, is why AI writing is so much more exciting than many other applications of artificial intelligence: because it offers the chance for communication and collaboration.