TL;DR: Venkatesh Rao proposes that Artificial Intelligence is actually Artificial Time: it compresses decades or centuries of learning into hours or minutes and makes them available to us as augmentation. E.g. AlphaGo played millions of games in a short time.
This is definitely the “brain explodes” article of this issue. It might be argued that Rao has too optimistic a view of AI/AT and skips over some issues, I don’t know, but regardless it’s a great read on why “simple algorithms and more data beat complex algorithms and less data,” on what models and inference are, latent spaces, augmentation, latent Centaurs, etc.
It’s a fascinating theory that I know will come back to mind often in coming years, including for his take on the challenge of bias (have a read), as well as on getting more conservative as we get older. On this last one, I’d add that most people might get more conservative in their actions, taking fewer risks and sticking to old habits, but that intellectually we have access to so much information that it’s actually possible to diversify your thinking a lot.
He also argues that we live very empty lives; if you take into account repetition and imitation, there is very little original data in each individual life, making it easy to compress. By contrast, one year of an AI “studying” a topic can be chock full of diverse data, making our year of learning look even smaller by comparison. Last thing: make sure to get to his Magnus Carlsen example to better grasp the concept.
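The compressibility point can be made concrete with a toy sketch (my illustration, not Rao's): using Python's `zlib` as a rough proxy for how much redundancy a data stream contains, a year of repeated daily routine shrinks to almost nothing, while the same volume of non-repetitive data barely compresses at all.

```python
import random
import zlib

# A "repetitive life": the same daily routine, replayed for a year.
routine_year = b"wake, commute, work, commute, dinner, sleep. " * 365

# A "diverse year": the same volume of data, but with essentially no
# repetition (random bytes stand in for genuinely novel experience).
random.seed(0)
diverse_year = bytes(random.randrange(256) for _ in range(len(routine_year)))

# Compression ratio = compressed size / original size (lower = more redundant).
ratio_routine = len(zlib.compress(routine_year)) / len(routine_year)
ratio_diverse = len(zlib.compress(diverse_year)) / len(diverse_year)

print(f"routine year compresses to {ratio_routine:.1%} of its original size")
print(f"diverse year compresses to {ratio_diverse:.1%} of its original size")
```

The routine year compresses to well under 1% of its size; the diverse one stays near 100%. That gap is, loosely, the gap Rao is pointing at between a human year and an AI's year of "study."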
If AI models cannot be reduced to human terms of reference, perhaps human thought can be expanded to comprehend computational terms of reference. Living in superhistory involves learning to do that. […]
The machine learning revolution has been driven by the development of a series of increasingly mathematically powerful frameworks (CNNs, transformers) that can digest increasing amounts of training data, with decreasing amounts of supervision, producing increasingly reliable inferences. […]
One way to think of this is: these AIs have already read vastly more text than I could in a thousand years, and digested it into writing minds (language models) that are effectively Ancient Ones. And their understanding of what they’ve digested is not limited by human interpretative traditions, or the identity insecurities of various intellectual traditions. […]
In many ways, I feel older than my father, who is 83. I know the world in much richer, machine-augmented ways than he does, even though I don’t yet have a prosthetic device attached to my skull. I am not smarter than him. I’ve just data-aged more than him.