Note — Nov 11, 2018

Artificial Intelligence Hits the Barrier of Meaning

Seen in → No.56

Source → nytimes.com/2018/11/05/opinion/artificial-i...

I’m a bit disappointed that Professor Mitchell, the author of the article, leans on examples that are somewhat anecdotal, especially since the rest of the piece is quite strong and hits on an important current limitation of AI: it recognizes patterns, but it doesn’t really understand anything.

Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. […]

While some people are worried about “superintelligent” A.I., the most dangerous aspect of A.I. systems is that we will trust them too much and give them too much autonomy while not being fully aware of their limitations. […]

But ultimately, the goal of developing trustworthy A.I. will require a deeper investigation into our own remarkable abilities and new insights into the cognitive mechanisms we ourselves use to reliably and robustly understand the world. Unlocking A.I.’s barrier of meaning is likely to require a step backward for the field, away from ever bigger networks and data collections, and back to the field’s roots as an interdisciplinary science studying the most challenging of scientific problems: the nature of intelligence.