Part of IEEE Spectrum’s special report, The Great AI Reckoning, this one is largely based on an interview with Raia Hadsell, head of robotics at DeepMind, and it’s quite engaging on two fronts. First, it’s another piece that gives us a better understanding of how far anyone is from artificial general intelligence (see “catastrophic forgetting”). Second, despite those limitations, it’s fascinating to see the different techniques Hadsell’s team and others are using to make it possible for one neural network (or a specific combination of a few) to learn multiple things one after the other without forgetting what it learned before, hopefully with each skill feeding into the others.
[I]nstead of having lots of neural networks, each trained on an individual game, you have just two: one that learns each new game, called the “active column,” and one that contains all the learning from previous games, averaged out, called the “knowledge base.” […]
[T]he progress-and-compress model, Hadsell says, will allow an AI system to transfer skills from old tasks to new ones, and from new tasks back to old ones, while never either catastrophically forgetting or becoming unable to learn anything new. […]
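The two-network loop described above can be sketched in a few lines. This is a deliberately toy illustration, not DeepMind’s implementation: the real method uses deep networks, lateral connections, and online elastic weight consolidation, whereas here both “networks” are plain linear models, the tasks are hypothetical noiseless regressions, and the “compress” step is a simple weighted average standing in for distillation.

```python
import numpy as np

def make_task(seed):
    """A toy regression task: learn y = W x for a task-specific W."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(2, 2))
    X = r.normal(size=(100, 2))
    return X, X @ W.T

def train_linear(X, Y, init, steps=500, lr=0.05):
    """Gradient descent on mean-squared error, starting from `init`."""
    W = init.copy()
    for _ in range(steps):
        grad = (X @ W.T - Y).T @ X / len(X)
        W -= lr * grad
    return W

knowledge_base = np.zeros((2, 2))  # accumulates learning from all past tasks
for task_seed in (1, 2, 3):
    X, Y = make_task(task_seed)
    # Progress: the active column starts from the knowledge base,
    # so skills from old tasks can transfer to the new one.
    active_column = train_linear(X, Y, init=knowledge_base)
    # Compress: fold the new skill back into the knowledge base
    # (a crude stand-in for distillation with a forgetting penalty).
    knowledge_base = 0.5 * knowledge_base + 0.5 * active_column
```

The key structural point survives even in this simplification: only two sets of weights ever exist, one specializing on the current task and one consolidating everything learned so far.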
"I have a fairly simplistic view of consciousness," she says. For her, consciousness means an ability to think outside the narrow moment of "now"—to use memory to access the past, and to use imagination to envision the future.