Note — Jun 23, 2019

AI Can Thrive in Open Societies

Seen in → No.85

Source → foreignpolicy.com/2019/06/13/ai-can-thrive-in-o...

Bruce Schneier and James Waldo take aim at the “China is an unrestrained surveillance state, thus has more data, thus will kick everyone’s ass in AI” trope. They don’t dwell much on US and Chinese societies themselves; instead they give a number of reasons why AI doesn’t / shouldn’t / won’t always require massive amounts of data to work, and argue that research in the field isn’t the kind of thing you can simply throw military money at, the way nuclear programs were.

In issue No.80, commenting on A new way to build tiny neural networks could create powerful AI on your phone, I wrote the following, which aligns quite well with the Schneier-Waldo piece: “The smaller networks possibility, paired with the advances in synthetic data, and the fact that some AIs can be based on much smaller data sets for training, points the way (I reckon) to AIs which would require smaller investments and be possible in more places / companies.”

Current machine learning techniques aren’t all that sophisticated. All modern AI systems follow the same basic methods. Using lots of computing power, different machine learning models are tried, altered, and tried again. … The different layers will try different features and will be compared by the evaluation function until the one that is able to give the best results is found, in a process that is only slightly more refined than trial and error. […]
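The “trial and error” they describe is easy to sketch. The snippet below is my own minimal illustration (not the authors’ code), using scikit-learn on a synthetic dataset standing in for real data: propose a few candidate network shapes, train each one, and let an evaluation function keep whichever scores best.

```python
# A minimal sketch of the try-alter-try-again loop: several candidate
# architectures are trained and compared by an evaluation function,
# and the best-scoring one is kept.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small synthetic dataset standing in for "the data we have".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Candidate layer shapes to "try, alter, and try again".
candidates = [(16,), (64,), (32, 32), (128, 64), (64, 32, 16)]

best_score, best_layers = 0.0, None
for layers in candidates:
    model = MLPClassifier(hidden_layer_sizes=layers, max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)  # the "evaluation function"
    if score > best_score:
        best_score, best_layers = score, layers

print(f"best architecture {best_layers}, validation accuracy {best_score:.3f}")
```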

All data isn’t created equal, and for effective machine learning, data has to be both relevant and diverse in the right ways. […]
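The relevance/diversity point lends itself to a quick sketch too. Below is my own illustration (again not from the article, scikit-learn on synthetic data): the same model is trained on two equally sized training sets, one with a badly skewed class mix and one balanced, then evaluated on the same held-out data. The skewed set usually fares noticeably worse on the class it barely saw; more rows alone don’t fix that.

```python
# Two training sets of identical size: one heavily skewed toward a single
# class, one balanced. Same model, same held-out test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=6000, n_features=10, random_state=1)
X_pool, y_pool, X_test, y_test = X[:5000], y[:5000], X[5000:], y[5000:]

rng = np.random.default_rng(1)
class0 = np.where(y_pool == 0)[0]
class1 = np.where(y_pool == 1)[0]

# Skewed sample: plenty of data, almost all of it from one class.
skewed = np.concatenate([rng.choice(class0, 950, replace=False),
                         rng.choice(class1, 50, replace=False)])
# Balanced sample of the same total size.
balanced = np.concatenate([rng.choice(class0, 500, replace=False),
                           rng.choice(class1, 500, replace=False)])

for name, idx in [("skewed", skewed), ("balanced", balanced)]:
    model = LogisticRegression(max_iter=1000).fit(X_pool[idx], y_pool[idx])
    score = balanced_accuracy_score(y_test, model.predict(X_test))
    print(f"{name:8s} training set -> balanced accuracy {score:.3f}")
```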

Just adding more data may help, but not nearly as much as added research into what to do with the data once we have it. […]

AI is a science that can be conducted by many different groups with a variety of different resources, making it closer to computer design than the space race or nuclear competition.