Note — Jan 13, 2019

Hinton and Hassabis: AGI Is Nowhere Close to Being a Reality

A somewhat more detailed overview of AI and machine learning than is usual for a mainstream piece, with some good quotes from Hinton and Hassabis downplaying the extravagant proclamations seen elsewhere.

Unlike the AI systems of today, he says, people draw on intrinsic knowledge about the world to perform prediction and planning. […]

“We don’t have systems that can … transfer in an efficient way knowledge they have from one domain to the next. I think you need things like concepts or abstractions to do that,” Hassabis said. “Building models against games is relatively easy, because it’s easy to go from one step to another, but we would like to be able to imbue … systems with generative model capabilities … which would make it easier to do planning in those environments.”

In an older piece, Hassabis again. As I mentioned on Twitter, this feels like a far more attainable and important vision for AI than self-driving cars or face recognition for surveillance.

But with rigorous attention to programs’ capabilities, and more research into the effects of the quality of the data we use as inputs and the transparency of their workings, we may find that AI can play a vital role in supporting all manner of experts by identifying patterns and sources that can escape human eyes alone.