July 29, 2018 Sentiers
Octonions. Private data and public good. Sortition. Elders. Scifi. Watson. Weeknotes.
I’m very happy to say that starting in August I’ll be helming another newsletter, this one a collaboration with Atelier 10, who publish the excellent Nouveau Projet print magazine. It’s called Repères and you can subscribe here. It’s in French, will be sent monthly, and covers much the same selection of topics as Sentiers.
Lastly, since finishing up an important contract a few weeks ago, I’m available for work based in idea curation, forecasting and sensemaking. I work with individuals and organizations wishing to broaden and accelerate their thinking: participating in company strategy around products and services, writing trend reports and topic briefs, curating events, and leading/editing online and print publications. Email me to know more.
The Octonion Math That Could Underpin Physics
Reading this I kept thinking Gibson, Arrival, Liu Cixin, or even The Dispossessed, Shevek and Odonians. From her brilliance, to her name (Cohl Furey), to her look, to her sometimes working on a yoga mat outside, to the sheer strangeness of what she’s working on… but it’s all real. There are four division algebras (“sets of numbers” that allow division), and the octonions are the only one of the four not yet shown to be central to a major theory in physics. Furey’s work is an attempt to make that fourth connection. I won’t try to explain more; the article is relatively easy to follow, for the first part anyway, after which it all goes a bit 😱🤯! (Via Robin Sloan.)
The suspicion, harbored by many physicists and mathematicians over the decades but rarely actively pursued, is that the peculiar panoply of forces and particles that comprise reality spring logically from the properties of eight-dimensional numbers called “octonions.” […]
Furey’s goal is to find the model that, in hindsight, feels inevitable and that includes mass, the Higgs mechanism, gravity and space-time.
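For the curious, here’s a minimal sketch (nothing to do with Furey’s actual machinery, and all names are my own illustration) of the chain the article alludes to: the Cayley-Dickson construction builds each of the four division algebras by “doubling” the previous one, and each doubling costs a property — quaternions lose commutativity, octonions lose associativity.

```python
# Cayley-Dickson construction: reals -> complex -> quaternions -> octonions.
# Each doubling represents a number as a pair (a, b) of numbers from the
# previous algebra; plain ints are the reals at the bottom of the recursion.

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    # Conjugation flips the sign of the "imaginary" half: (a, b)* = (a*, -b).
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# Quaternion basis units, written as pairs of (complex) pairs of reals.
one = ((1, 0), (0, 0))
i   = ((0, 1), (0, 0))
j   = ((0, 0), (1, 0))
k   = ((0, 0), (0, 1))

# Quaternions lose commutativity: ij = k but ji = -k.
assert mul(i, j) == k and mul(j, i) == neg(k)

# Octonions are pairs of quaternions; one more doubling loses associativity.
zero_q = ((0, 0), (0, 0))
e1 = (i, zero_q)
e2 = (j, zero_q)
e4 = (zero_q, one)
assert mul(mul(e1, e2), e4) != mul(e1, mul(e2, e4))
```

Running it confirms the pattern: the same three-line multiplication rule quietly sheds commutativity, then associativity, as the pairs nest deeper.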
Alternative models for private data and AI
Let’s make private data into a public good
I already linked to a price-vs-value piece by Mariana Mazzucato back in No.38; this time she applies that thinking about publicly funded research to the GAFA and our private data.
“But Google doesn’t give us anything for free. It’s really the other way around—we’re handing over to Google exactly what it needs. […] The bulk of Google’s profits come from selling advertising space and users’ data to firms. Facebook’s and Google’s business models are built on the commodification of personal data, transforming our friendships, interests, beliefs, and preferences into sellable propositions. […]
And because of network effects, the new gig economy doesn’t spread the wealth so much as concentrate it even more in the hands of a few firms. Like the internal-combustion engine or the QWERTY keyboard, a company that establishes itself as the leader in a market achieves a dominance that becomes self-perpetuating almost automatically. […]
The underlying infrastructure that all these companies rely on was created collectively (via the tax dollars that built the internet), and it also feeds off network effects that are produced collectively. There is indeed no reason why the public’s data should not be owned by a public repository that sells the data to the tech giants, rather than vice versa.
What if people were paid for their data? – Data workers of the world, unite
In a similar vein to the above, this one looks at Weyl and Posner’s fascinating idea of reframing Artificial Intelligence as Collective Intelligence, since it’s simply machines learning from our data and our actions: “data provided by humans can thus be seen as a form of labour which powers AI.” (Also, it’s in The Economist!)
He argues that in the age of artificial intelligence, it makes sense to treat data as a form of labour. […]
Similarly, Mr Weyl expects to see the rise of what he calls “data-labour unions”, organisations that serve as gatekeepers of people’s data. Like their predecessors, they will negotiate rates, monitor members’ data work and ensure the quality of their digital output, for instance by keeping reputation scores.
Futures of Power – Algorithmic Power and Democracy
The people at Superflux really do some fascinating work. Case in point: this recent experiment in using sortition (“using stratified random sampling to select citizens by lot to populate assemblies or political positions”) to assemble a diverse group of people to deliberate on the issue of algorithmic power. The article covers how successful they were (somewhat), how they structured the debate, and what the results were.
What was most powerful about this event is that the emergent data were mainly further questions. This is significant, as currently, this is what differentiates algorithmic “intelligence” from human intelligence. Algorithms are good at following processes and giving answers. Human beings, on the other hand, are great at asking questions.
Why We Created the Modern Elder Academy
Chip Conley, formerly of Airbnb, has opened an “academy” for mid-life questions and career re-orientation. Although it feels first-world and privileged, his thinking is still based on some good questions about longevity, retirement, and multiple careers.
In an era that prizes digital intelligence, we believe the need for wisdom, emotional intelligence, and the ability to collaborate and coach is greater than ever. The Modern Elder Academy was created to help people in midlife to repurpose their knowledge and embrace their mastery while appreciating the roles of both a wisdom keeper and seeker.
Science Fiction Is Not Social Reality
S. A. Applin, Ph.D., makes the case that so much of technological innovation is conceptually and philosophically based on science fiction that it causes serious problems of assumed readiness and not enough thinking. Basically, people tend to assume things will work because they did in the book / movie, forgetting that it’s… fiction. Make sure not to just send this one to Instapaper or Pocket; open it in a browser and click through the links, because the article is packed with examples and references.
Companies that have invested and invented technologies based on mythology set in a mythical future, are trying to realize these, now in the present, within a society that hasn’t yet evolved to adapt to them and maybe never will. […]
But all of them in aggregate in various hardware and software forms, should be very worrying to us. We have allowed a wealthy consolidated power base (tech billionaires and the companies they control) to build our present, and our future, based in part on fantasized stories of a mythical future. […]
These are untested. These are unrealized. These are being deployed on a massive scale.
++ Why the Color of Technology Must Change
Amber Case on the problems of blue light, and the impact scifi’s depictions of the future have had on our favouring it in actual products over the more practical choice of orange and red light.
Blue light inhibits the production of melatonin, the hormone that regulates our sleep cycles. […]
Ridley Scott’s depiction of the future was believable, compelling, and most of all, dark — both figuratively and literally. The blue light from pervasive display screens depicted in the movie fit its shadowy film noir aesthetic, and inadvertently became one of the core tenets in our default mental image of “what the future looks like”.
“In the long term, we want there to be a consortium of industry leaders, consumer groups, government groups,” says Fair. “But until we have a reasonable critical mass, it’s not an interesting conversation.”
++ IBM Watson Reportedly Recommended Cancer Treatments That Were ‘Unsafe and Incorrect’
Another example showing that much of AI is just not ready for real-world use, and of a surprising casualness about data selection, considering data is what the whole thing is based on.
The documents state that—instead of feeding real patient data into the software—the doctors were reportedly feeding Watson hypothetical patients’ data, or “synthetic” case data. This would mean it’s possible that when other hospitals used the MSK-trained Watson for Oncology, doctors were receiving treatment recommendations guided by MSK doctors’ treatment preferences, instead of an AI interpretation of actual patient data.
(Now referring to planet, environment, cities, politics, and society.)
xkcd: Earth Temperature Timeline
Spoiler: it doesn’t end well.
++ The case for building $1,500 parks
A study shows how very little investment in making a vacant lot greener can have an impact on the mental health of the people who see it regularly.
Participants took a Psychological Distress Scale survey before and after the greenings, revealing a 40% reduction in feeling depressed and a 50% reduction in feeling worthless. The impact was even more pronounced in participants living below the poverty line.
New York City council members voted to require the home-rental company to hand over the names and addresses of its hosts in the city. The officials have said they need the information to police Airbnb hosts that operate illegally and drive up neighborhood rents. Airbnb has said the bill represents an unreasonable violation of users’ privacy.
++ How Manhattan’s Grid Created the Prettiest Mosaic Ever Made
The history of the NYC street grid with many lovely details. Make sure to get to the bolts and pizza slice bits.
A pre-history of weeknotes, plus why I write them and perhaps why you should too
Pardon me if I delve once again into the BERG scenius, but this is a good recap of the practice of writing weeknotes, for those not aware of the concept and/or looking for good reads. (It is a bit inside baseball.)
++ The True Story Of A Man-Eating Tiger’s ‘Vengeance’
Following from last week’s “things that read like movie scripts.”
The injured tiger hunted Markov down in a way that appears to be chillingly premeditated. The tiger staked out Markov’s cabin, systematically destroyed anything that had Markov’s scent on it, and then waited by the front door for Markov to come home.