Note — Sep 15, 2019

Face Recognition, Bad People and Bad Data

Seen in → No.94

Source → ben-evans.com/benedictevans/2019/9/6/face-r...

Ben Evans thinking about face recognition, people, and data. I’m recommending it as a useful read for two reasons. First, because he explains a number of angles quite well, along with the questions to be asked and public perceptions of what is ok and what is questionable. Second, because it should be read with an eye to his (likely correct) view that ever-cheaper cameras and AI on the edge will mean smart cameras appearing in lots and lots of things (see his computer vision archive). However, not unexpectedly, he uses the “we” a lot (I do that too, and am trying to correct it), without defining it and without spending time on the diversity of populations, needs, privileges, and worries. Basically: good explanations, good questions, but this will need to happen for a much wider spectrum of people and lives.

It’s just doing a statistical comparison of data sets. So, again - what is your data set? How is it selected? What might be in it that you don’t notice - even if you’re looking? How might different human groups be represented in misleading ways? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result? […]
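One concrete way to start answering “how is it selected?” is simply to count who is in the data set. A minimal sketch, not from the article, with invented group names and numbers purely to illustrate the kind of skew the questions above are probing for:

```python
from collections import Counter

# Hypothetical training set: each record is (group, label).
# Everything here is made up to show a representation imbalance.
training_set = (
    [("group_a", "match")] * 800 + [("group_a", "no_match")] * 150
    + [("group_b", "match")] * 40 + [("group_b", "no_match")] * 10
)

# Count how each group is represented before trusting any model trained on it.
composition = Counter(group for group, _ in training_set)
print(composition)  # group_a: 950, group_b: 50 — group_b is barely represented
```

A model trained on such a set can look accurate overall while being much less reliable for the underrepresented group, which is exactly the kind of thing you might not notice even if you’re looking.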

But machine learning doesn’t give yes/no answers. It gives ‘maybe’, ‘maybe not’ and ‘probably’ answers. It gives probabilities. So, if your user interface presents a ‘probably’ as a ‘yes’, this can create problems. […]
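The probability-versus-presentation point can be sketched in a few lines. This is an illustration of the idea, not anything from the article; the function names and the 0.5 and 0.9 cutoffs are my own assumptions:

```python
def describe_match(score: float) -> str:
    """Report the hedged answer the model is actually giving."""
    if score >= 0.9:
        return "probably"
    if score >= 0.5:
        return "maybe"
    return "maybe not"

def naive_ui(score: float) -> str:
    """A UI that collapses everything above one cutoff into a flat 'yes'."""
    return "yes" if score >= 0.5 else "no"

# The same score, two presentations: the model is saying 'maybe',
# but the naive interface reports a confident 'yes'.
score = 0.62
print(describe_match(score))  # maybe
print(naive_ui(score))        # yes
```

The problem Evans describes lives entirely in that second function: the underlying probability is unchanged, but the interface has erased the uncertainty.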

But, just as we had to understand that databases are very useful but can be ‘wrong’, we also have to understand how this works, both to try to avoid screwing up and to make sure that people understand that the computer could still be wrong. […]

There’s something about the automation itself that we don’t always like - when something that has always been theoretically possible on a small scale becomes practically possible on a massive scale.