Note — Apr 04, 2021

Why Computers Won’t Make Themselves Smarter

Prize-winning science-fiction author Ted Chiang, writing at The New Yorker, presents a well-thought-out and well-argued case against the idea of superintelligent AIs augmenting themselves and bringing about the singularity.

Chiang considers multiple parallels and debunks the singularitarian thesis by showing that each of them is impossible, so why would we think AIs will be able to manage that trick? [Spoiler] It’s an excellent read if you’re interested in the topic, but also worth considering alongside the next piece, since both end up with a similar conclusion around collaboration, peers, how science works, and cognitive tools. You can also read it as addressing the brain aspect of the manufactured robots one more article down.

Humanity has developed thousands of such tools throughout history, ranging from double-entry bookkeeping to the Cartesian coördinate system. So, even though we aren’t more intelligent than we used to be, we have at our disposal a wider range of cognitive tools, which, in turn, enable us to invent even more powerful tools. […]

[Y]ou’re better off having a lot of people drawing inspiration from one another. They don’t have to be directly collaborating; any field of research will simply do better when it has many people working in it. […]

Some might call this phenomenon an intelligence explosion, but I think it’s more accurate to call it a technological explosion that includes cognitive technologies along with physical ones.