This week → Decision trees ⊗ R2D2 as a model for AI collaboration ⊗ Civic logic: social media with opinion and purpose ⊗ The way we train AI is fundamentally flawed ⊗ Bolivia’s new Vice President David Choquehuanca calls for brotherhood and unity ⊗ Where the teens are hanging out in quarantine
A year ago → The most clicked link in issue No.107 was An introduction to metamodernism: the cultural philosophy of the digital age by Anne-Laure Le Cunff.
Some quick housekeeping: Thanks a lot to everyone who answered last week’s poll; close to 300 answers should be pretty representative. It’s still open, and takes maybe 20 seconds to answer three one-click questions.
GoodThingsFest, ThingsCon’s 2020 annual gathering, is in two weeks, and I’ll be one of the people doing an AMA (Ask Me Anything) on Friday, so do join us! My Learning Collection booklet is available for purchase and free to members. Finally, this week I posted On projects, newsletters, products, and formats, which gives a bit of insight into the coming months and some speculation on where micro-media might be going.
Subscribe Enjoyed this issue? Get the weekly Sentiers on technology in society, signals of change, and prospective futures.
Beautiful speech by the new Bolivian VP, mixing indigenous peoples’ history, beliefs, and cultures with present challenges, the potential for collaboration, unity, respect, and a renewed emphasis on the common good.
It requires that we be free and balanced individuals to build harmonious relationships with others and with our environment, it is urgent that we be able to maintain balance for ourselves and for the community. […]
We will return to our Qhapak Ñan, the noble path of integration, the path of truth, the path of brotherhood, the path of unity, the path of respect for our authorities, our sisters, the path of respect for fire, the path of respect for the rain, the path of respect for our mountains, the path of respect for our rivers, the path of respect for our mother earth, the path of respect for the sovereignty of our peoples. […]
Now no more abuse of power. Power has to be to help. Power has to circulate. Like the economy, power has to be redistributed. It has to circulate. It has to flow, just as blood flows within our body. Now no more impunity, justice brothers.
Jason Rhys Parry presents an overview of a number of scenarios: from architect Bradley Cantrell and his co-authors, to artists Tega Brain, Julian Oliver, and Bengt Sjölén, on to theorist Benjamin Bratton. The theories explored offer different visions of how environmentalism might be automated, circumventing the politicians who have done so very little over the last 50 years. Ranging from a sci-fi-like “fully automated luxury primitivism,” where an AI makes decisions and enacts them at scale, to a relatively plausible loop in which sensors in an ecosystem find agency through a legal algorithm enacting their rights (referencing autonomous corporations and legal precedents), the piece offers a very intriguing thought experiment where algorithms meet ecosystems.
Beyond collecting climate data, environmental sensors pick up the signs of political paralysis and corruption. […]
Bratton posits a synthesis: an emerging artificial intelligence that is bent not on monomaniacally murdering humans but on modeling planetary systems and mastering the subtleties of as-yet-undiscovered forms of biosemiotics. […]
Under the current regulations on corporate charters, LoPucki claims, algorithms can legally own property, enter into contracts, seek legal counsel, and even spend money on political campaigns. […]
As far-fetched as these scenarios might sound, many of the legal and some of the technological requirements for their realization are already being put in place. Entities like terra0 could provide a model of how automated environmental management could expand a kind of agency to ecosystems threatened by the Anthropocene. […]
Granting programs trained by nonhuman entities the capacity to generate policy recommendations, file lawsuits, and organize petitions could result in a formidable challenge to a political economy predicated on the denial of ecological agency.
- 📡 😭 🇵🇷 Arecibo Observatory, a Great Eye on the Cosmos, Is Going Dark. “One of its directors was astronomer Frank Drake, then at Cornell, now retired from the University of California, Santa Cruz. He was famous for first pointing a radio telescope at another star for indications of friendly aliens, then for an equation, still in use today, that tries to predict how many of “them” are out there.”
- 🤯 😳 These are getting good!! (However, please don’t say “mind of a computer.” Ever.) Do These A.I.-Created Fake People Look Real to You? “Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.”
- 🦠 🇺🇸 The Moral Calculus of COVID-19. “This is contrarianism on a scale not usually seen in a newspaper article. (They’re usually too short to take this many turns.) It is one thing to counter received wisdom by posing a counterfactual. It is another to spend hours of reporting, gathering facts, calling in experts, putting everything on the record, and then deciding that none of that matters.”
- 🌼 🧮 Artificial Blooms: Digital Botanics Showcase the Fractals, Tessellations, and Repetitive Features of the Natural World. “The digital renderings showcase the complexity of organic structures while also highlighting the fractals and endless intricacies inherent to nature’s designs. … [I]t was fascinating to find that a lot of floral and plant structures follow certain mathematical rules, which we could replicate and apply to our own structures.”
- 🇩🇪 🐻 Berlin’s Second-Hand Craze Is Turning It into a ‘Zero-Waste City’. “‘Shopping and Saving the World,’ reads the sign at the entrance. Beyond is a 7,000-square-foot sales area displaying all manner of upcycled goods: sparkling beer glasses; CDs, records and books; an overhauled vacuum cleaner; tables and beds made of discarded wood from Berlin construction sites; children’s toys; hi-fi systems, and rows and rows of textiles.”
- 6 Sci-Fi Writers Imagine the Beguiling, Troubling Future of Work. “How are our impulses to fear, to hope, and to wonder built into the root directories of our tech? Will we become more machine-like, or realize the humanity in the algorithm? Will our answers fall somewhere in symbiotic in-between spaces yet unrealized?”
- 👽 🇺🇸 Aliens or Artists? Metallic Monolith Discovered in Rural Utah. “We’re not saying it’s definitely aliens, but it’s definitely aliens. We’re ready to go home, aliens, but be careful when you come back to Earth. We wouldn’t want you to get covid-19, a disease that has so far afflicted over 59 million humans and killed 1.39 million on this stupid planet of ours. On second thought, just vaporize us if you need to.”
Excellent piece by Alexis Lloyd on AI / automation / bots / robots, how we design relationships with them, and the three archetypes she identified for different approaches to designing machine intelligence: C3PO, Iron Man, and R2D2. Notable ideas include an axis where, as the “distance between the person and technology increases, you start to get deeper into questions of relationships and communication,” and Lloyd’s hypothesis that the anthropomorphic model for robots is a skeuomorph because “we haven’t developed new constructs for machine intelligence yet.” (I love that!) I always found R2D2 a bit weird since it wasn’t designed with a voice, but here she frames it as having its own language: “he doesn’t try to emulate human language; he converses in a way that is expressive to humans, but native to his own mechanic processes.”
Augmentation and collaboration with algorithms and various machines is definitely a topic to pay close attention to, and this piece gives a useful way to interpret them.
We need to stop trying to make machines be like people and find some more interesting constructs for how to think about these entities. […]
It’s fascinating to watch how prosthetic technologies often begin as assistive, as adaptations for disability, and then get repurposed (and remarketed) as augmentation. […]
[W]hen we get to the end of the spectrum where the machine is not only separate from the self but also has agency — it has ways of learning and rubrics for making its own decisions. […]
As we design interactions with these kinds of machine intelligences, what are their versions of R2D2’s language? What expressions feel native to their processes? What unique insights can we gain from the computational gaze? […]
Let’s not let the future of AI be weird customer service bots and creepy uncanny-valley humanoids. Those are the things people make because they don’t have the new mental models in place yet. They are the skeuomorphs for AI; they are the radio scripts we’re reading into television cameras.
Ethan Zuckerman and Chand Rajendra-Nicolucci look at a number of social networks organized around civic logics: their rules, their success relative to their goals, potential revenue models, and other avenues for reaching those goals, including how some subreddits can be good examples of civic spaces and moderation. (We can also connect this to the “dark forest of the internet” trend of people drifting to smaller online spaces.)
But that’s really the point of networks that operate on civic logics. They’re not for everyone, not for every use case, but they provide critically important spaces for conversations that are difficult to hold elsewhere, and which make us richer and more resilient as a society. […]
vTaiwan uses upvotes and downvotes on posts to generate a map of the debate, creating clusters of people who voted similarly. The clusters show where there are divides and where there is consensus. People then try to draft comments that win upvotes from both sides of a divide, bringing them closer together. […]
But survival is a low bar to clear. The exciting possible future for civic-logic networks is that they become regarded as public goods—aspects of our social infrastructure that are so important that we choose to support them through taxpayer dollars or through community giving, the way we support libraries and public parks.
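The vTaiwan mechanism quoted above (cluster people by how similarly they vote, then look for statements that win support across the divide) can be sketched in a few lines. The toy votes, the simple 2-means clustering, and the “bridging” check below are my own illustrative assumptions, not the actual vTaiwan/Pol.is pipeline:

```python
# Each person is a vector of votes over statements s0..s3:
# +1 agree, -1 disagree, 0 pass. Hypothetical data for illustration.
votes = {
    "a": [ 1,  1, -1,  1],
    "b": [ 1,  1, -1,  1],
    "c": [-1, -1,  1,  1],
    "d": [-1, -1,  1,  1],
}

def mean(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v))

# Tiny 2-means, seeded with two actual voters, to map the debate
# into two opinion clusters.
centers = [votes["a"], votes["c"]]
for _ in range(10):
    groups = {0: [], 1: []}
    for person, v in votes.items():
        k = min((dist(v, c), i) for i, c in enumerate(centers))[1]
        groups[k].append(v)
    centers = [mean(groups[0]), mean(groups[1])]

# "Bridging" statements: positive mean vote in BOTH clusters,
# i.e. the ones that win upvotes from both sides of the divide.
bridging = [i for i, (c0, c1) in enumerate(zip(*centers)) if c0 > 0 and c1 > 0]
print(bridging)  # → [3]: only statement s3 has support from both sides
```

Drafting a new comment and watching whether it lands in the bridging set is, roughly, the consensus-building loop the quote describes.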
During the pandemic, kids of all ages have fled their parents and flocked to online communities and spaces like Discord, Roblox, Minecraft, and Among Us. The piece presents some stats, examples, and insights from Mimi Ito, director of the Connected Learning Lab at the University of California, Irvine. Another one to connect to “dark forest internet,” as well as to the influence of gaming culture and tools on the rest of society, two recurring themes here, as you know.
“Even in the first few months [of quarantine] there was all this writing by reporters who had been anti-screen time: ‘We have given up. The kids have won,’” Ito says. But in fact, the pandemic “is finally giving adults a window into the fact that these are real relationships. […]
Public libraries across the country have also moved formerly in-person programming online, using Discord as well as Zoom and Google Hangouts for meetings. The Coeur d’Alene Public Library in Idaho set up private Discord servers for Dungeons & Dragons and video gaming; the Peters Township Public Library in Pennsylvania created a Harry Potter-themed digital escape room with Google Forms, reported American Libraries.
If you want a good idea (or new proof) of the kind of early-stage (half-baked?) AI being created and put to use in society right now, this is a great one. The problem known as data shift (a mismatch between testing and real-life data) is already well known; now 40 researchers at Google are finding that underspecification is a big issue too. In something that brings to mind the discussions around black boxes, they’ve found that many models can test successfully but then prove completely ineffectual, or vary greatly in performance, in the real world, sometimes simply because the random training data was slightly different. Unsurprising conclusion: they need to test a lot more and get better at specifying requirements ahead of time. “D’uh!” comes to mind.
The training process can produce many different models that all pass the test but—and this is the crucial part—these models will differ in small, arbitrary ways, depending on things like the random values given to the nodes in a neural network before training starts, the way training data is selected or represented, the number of training runs, and so on.
In other words, the process used to build most machine-learning models today cannot tell which models will work in the real world and which ones won’t.
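A minimal toy illustration of underspecification (my own sketch, not the Google team’s setup): when two features are perfectly correlated during training, many different models fit the data equally well, and which one a run lands on is invisible until the correlation breaks in the real world.

```python
import random

random.seed(0)

# Training data: features x1 and x2 are perfectly correlated (x2 == x1),
# and the true relationship is y = 2 * x1.
x1 = [random.gauss(0, 1) for _ in range(100)]
train = [(a, a) for a in x1]
y_train = [2 * a for a in x1]

def predict(w, rows):
    return [w[0] * r0 + w[1] * r1 for r0, r1 in rows]

# Two "models" that differ only in an arbitrary choice the training
# process could have made either way:
w_a = (2.0, 0.0)  # relies on x1 only
w_b = (0.0, 2.0)  # relies on x2 only

# Both pass the test perfectly — indistinguishable during evaluation.
assert predict(w_a, train) == y_train
assert predict(w_b, train) == y_train

# Deployment: the spurious correlation breaks, x2 is now unrelated noise,
# but the true relationship y = 2 * x1 is unchanged.
x1_new = [random.gauss(0, 1) for _ in range(100)]
x2_new = [random.gauss(0, 1) for _ in range(100)]
real = list(zip(x1_new, x2_new))
y_real = [2 * a for a in x1_new]

err_a = max(abs(p - t) for p, t in zip(predict(w_a, real), y_real))
err_b = max(abs(p - t) for p, t in zip(predict(w_b, real), y_real))
print(err_a, err_b)  # w_a is still exact; w_b fails badly
```

Replace the hand-picked weights with neural networks and the seed-dependent choice between them with initialization and data ordering, and you have the paper’s point in miniature: testing alone cannot tell these models apart.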