Face recognition, bad people and bad data / Risks in econ. model of climate change / Quantum algo may be a property of nature / Hidden cities

I’ve been wondering what to do for issue No.100 and I missed this last week: it’s been two years of Sentiers!! 🥳🎉😅

For the last too many weeks, I’ve been pondering a paying version of Sentiers, twisting and turning (and twisting and turning, and tw…) what it should entail, at what price, and when to launch it. One problem is that I keep thinking people don’t need more stuff to read, they need better stuff. But that runs counter to what everyone else is offering, and changing minds is a big challenge, so I’d like to offer a bit more of the things that matter. Also, considering the small share of readers who usually pay for subscriptions, it might take a while before it’s really worth the time to write more… but it’s the kind of thing I enjoy doing.

But paid newsletters are multiplying, so I’m going ahead now. I’m keeping the free-for-everyone weekly in the same format. Paying subscribers will get two more “things” a month. Depending on inspiration and opportunities, it will be a mix of longform articles from me or collaborators, interviews, and special dispatches of Sentiers focused on one topic (exactly the same format as what you are reading today, but on one thing at a time, like AI, soft cities, or design…).

Two years ago, I launched Sentiers as my solo, minimum viable but scalable version of our old The Alpine Review print mag, and I view this new subscription model as a streaming publication. Paying subscribers finance longer-form content and more research; the output will grow as the paying membership does. The next stages will be sending an additional piece a week instead of twice a month, then a quarterly zine-ish compilation, and eventually a yearly, potentially printed, year in review. Details will be hammered out with your input, and Founding members (see below) will be able to peek in and influence things more readily. You can subscribe now; extra content starts going out in October.

(If you are already a Patron, you got a deal! You’ll be switched to the new model, no charge.)

This week: Face recognition, bad people and bad data / The deadly hidden risks within the most prominent economic model of climate change / An important quantum algorithm may actually be a property of nature / On “AI” replacing jobs and humans / Hidden cities

A year ago: For safety’s sake, we must slow innovation in internet-connected things (Bruce Schneier).

Face recognition, bad people and bad data

Ben Evans thinking about face recognition, people, and data. I’m recommending it as a useful read for two reasons: first, because he explains a number of angles quite well, along with the questions to be asked and public perceptions of what is ok and what is questionable. Second, because it should be read with an eye to his (likely correct) view that ever cheaper cameras and AI on the edge will mean smart cameras appearing in lots and lots of things (see his computer vision archive). However, not unexpectedly, he uses the “we” a lot (I do that too, and try to correct it), without defining it and without spending time on the diversity of populations, needs, privileges, and worries. Basically: good explanations, good questions, but this thinking will need to happen for a much wider spectrum of people and lives.

It’s just doing a statistical comparison of data sets. So, again - what is your data set? How is it selected? What might be in it that you don’t notice - even if you’re looking? How might different human groups be represented in misleading ways? And what might be in your data that has nothing to do with people and no predictive value, yet affects the result? […]
But machine learning doesn’t give yes/no answers. It gives ‘maybe’, ‘maybe not’ and ‘probably’ answers. It gives probabilities. So, if your user interface presents a ‘probably’ as a ‘yes’, this can create problems. […]
But, just as we had to understand that databases are very useful but can be ‘wrong’, we also have to understand how this works, both to try to avoid screwing up and to make sure that people understand that the computer could still be wrong. […]
There’s something about the automation itself that we don’t always like - when something that has always been theoretically possible on a small scale becomes practically possible on a massive scale.
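
A minimal sketch of that “probably presented as a yes” problem, in Python; the similarity score, the 0.8 threshold, and the interface strings are all my own illustrative assumptions, not anything from Evans’ piece:

```python
# Illustrative only: a face "match" is a similarity score compared against a
# threshold, not a true yes/no answer from the computer.

def naive_ui(similarity: float, threshold: float = 0.8) -> str:
    # Collapsing the probability into a flat answer hides the uncertainty.
    return "MATCH" if similarity >= threshold else "NO MATCH"

def honest_ui(similarity: float) -> str:
    # Surfacing the score reminds the operator that the computer could be wrong.
    return f"Possible match (confidence {similarity:.0%}), review before acting"

print(naive_ui(0.81))   # MATCH
print(honest_ui(0.81))  # Possible match (confidence 81%), review before acting
```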

Hidden cities

Super interesting reflection by Nadia Eghbal on how surges of people and general overcrowding affect offline and online communities. (Found in a couple of places, including Tom and Brendan’s Networked Communities blogchain.)

Blasting a fragile ecosystem with a firehose of outsiders, even if they have good intentions, can destroy its essence. […]
One tweet, post, or article from someone with a big enough audience is like sunlight on a magnifying glass, concentrating screaming hordes of people onto a single, unsuspecting point that spontaneously combusts into flames. I can’t quite blame some communities for wanting to avoid press, even good press. […]
I don’t want to suggest that we should resist change entirely. I do think we can be thoughtful about the rate of change that we introduce. I also think it’s a choice, rather than an inevitability, to drop bombs that throw an entire ecosystem off-balance.

An important quantum algorithm may actually be a property of nature

This is your mind-blowing (expanding?) read for the week. In 1996 the physicist Lov Grover came up with a quantum algorithm to greatly reduce the time required for searching through a database of N entries. Since then, there haven’t been quantum computers powerful enough to implement it at any useful scale. Now Stéphane Guillet and colleagues at the University of Toulon in France say they have evidence that “under certain conditions, electrons may naturally behave like a Grover search, looking for defects in a material.” I’m not going to try to simplify this accurately, read the article, but super quickly: they simulated electrons “walking” paths on triangular and square grids, and the electrons naturally implement Grover searches in doing so. The bigger 🤯 is that the same algorithm also fits DNA research by Apoorva Patel at the Indian Institute of Science in Bangalore and would explain why there are four nucleotide bases in DNA and twenty amino acids in proteins (see the second-to-last quote below).
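
For a rough sense of the speed-up involved (my framing, not the article’s): unstructured search over N entries needs on the order of N classical lookups, while Grover’s algorithm needs only about √N quantum queries.

```latex
% Query counts for unstructured search over N entries
\text{classical: } O(N) \qquad \text{Grover: } O(\sqrt{N}), \text{ roughly } \tfrac{\pi}{4}\sqrt{N} \text{ iterations}
% e.g. N = 10^6: about a million classical checks vs ~785 Grover iterations
```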

The first quantum computer capable of implementing it appeared in 1998, but the first scalable version didn’t appear until 2017, and even then it worked with only three qubits. So new ways to implement the algorithm are desperately needed. […]
The team focused on simulating the way a Grover search works for electrons exploring triangular and square grids, but they also included other physically realistic effects, such as defects in the grid in the form of holes, and quantum properties such as interference effects. […]
In other words, if the search processes involved in assembling DNA and proteins are to be as efficient as possible, the number of bases should be four and the number of amino acids should be 20—exactly as is found. The only caveat is that the searches must be quantum in nature. […]
Since then, an increasing body of evidence has emerged that quantum processes play an important role in a number of biological mechanisms. Photosynthesis, for example, is now thought to be an essentially quantum process.

Asides

On “AI” replacing jobs and humans

How artificial intelligence is not that intelligent, and how it forces us to simplify humans and human activities so they can be fitted into the algorithms. A reminder that investors in Uber (and others) probably grossly underestimated the time between putting their money in and automation replacing the drivers, which is what would make the business models workable. Looks like it will take a lot longer than they thought, and they will start trying to find other ways to recoup their investments sooner rather than later. Same for various short- and medium-term AI bets, which should have been much longer-term.

And it is fun to see all these shitty robots and failing “AI” systems and their ridiculous statements about the world that mostly just say a lot about what kind of data the most privileged engineers on this planet deem to be “good” or “correct”. It’s like watching a small puppy stumbling about. But this puppy has teeth and is willing to use them. […]
The term “Software is eating the world” is old news by now. Software has eaten the world mostly. And it is a very relevant showcase for those who claim that ‘”AI” is now eating the world’: Software didn’t adapt to the world. We adapted the world to software. […]
In order for these systems to work, processes need to be simplified, complexity needs to be stripped.

The deadly hidden risks within the most prominent economic model of climate change

Many climate policies are based on “the most famous economic model of climate change,” created by Professor William Nordhaus of Yale. The article argues that the model is highly imperfect, built on simplifying choices that just don’t reflect the complexity and gravity of possible outcomes. First, his “damage function” is based on projections, which are very unreliable, prompting even Nordhaus to add a 25 percent (!!) “fudge factor” because his source numbers ignore things like “losses from biodiversity, ocean acidification, political reactions,” extreme events, and uncertainty. Second, he assumes that all climate mitigation is an economic drain, when it could actually act as a boost to stagnating economies and, you know, save lives. The third “has to do with uncertainty, which throws his entire style of analysis into question” (see the last quote). Ryan Cooper, the author, makes many important points, though my conclusion is simpler: it’s an irresponsible calculation, drawing tidy linear conclusions from neoclassical economic thinking concerned only with not hurting the economy too much. Humans and other living creatures matter, and survival for the greatest number should be at the center of the calculations.

Following Nordhaus’ “optimal” path, carbon dioxide concentrations would approach 650 parts per million by the end of the century, or about 2.5 times the pre-industrial figure, and the average atmospheric temperature would be about 3.5 degrees Celsius higher. […]
As the late Harvard economist Martin Weitzman writes, “All damage functions are made up — especially for extreme situations.” […]
Some of the underlying studies do attempt to model various economic sectors from warming, but at bottom the estimates do not show any tipping points because the authors assumed there aren’t any. It’s a rather shaky foundation for a policy model that purports to show the optimal trajectory of climate emissions for all of human society. […]
Yet Nordhaus’ model finds that 12 degrees of warming — when more than half the population would regularly be risking death just by stepping outside — would only cut world GDP by about 34 percent. […]
[C]onsidering the future of climate change involves an “uncertainty explosion.” First, there is the unknown of how much humans will emit over the coming years, which feeds into uncertainty in how earth’s biosphere will deal with all the extra carbon, which feeds into uncertainty over how temperatures will respond to increased carbon dioxide concentration, which feeds into uncertainty over how particular regions will respond (the Arctic, for instance, is warming much faster than the rest of the planet).
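
A rough worked example of how that 34 percent figure falls out (my calculation, assuming the quadratic damage function commonly cited for recent DICE versions, with damages as a fraction of GDP ≈ 0.00236 × T²; treat the coefficient as an assumption, not a quote from the article):

```python
# Rough sketch, not the actual DICE model: quadratic damage function,
# damages as a fraction of GDP ~= 0.00236 * T^2 (coefficient assumed).
def damage_fraction(temp_rise_c: float) -> float:
    return 0.00236 * temp_rise_c ** 2

print(f"{damage_fraction(3.5):.1%}")   # ~2.9% of GDP on the "optimal" 3.5 C path
print(f"{damage_fraction(12.0):.1%}")  # ~34.0% of GDP at 12 C of warming
```

A smooth parabola like this cannot, by construction, produce a tipping point, which is exactly the “shaky foundation” Cooper points to.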

Your Futures Thinking Observatory