Want to understand the world & imagine better futures?
Also this week → What happens when the computers disappear? ⊗ What’s the point of reading writing by humans? ⊗ Cory Doctorow on the Future Now podcast ⊗ Naive Yearly
Why an easier life is not necessarily happier
I’m not sure if this is to be filed under ‘history doesn’t repeat itself but it echoes’ or under ‘sometimes it takes a while’ but this one by L. M. Sacasas definitely gives us some (recent) historical perspective on technology and AI. He’s republishing an essay, slightly revised, which he wrote in 2014. He first recaps three articles by Tim Wu and then jumps off from there to the work of Albert Borgmann, in honour of whose passing he’s republishing the post.
“In this last post, Wu suggests that ‘the use of demanding technologies may actually be important to the future of the human race.’” Wu also talks about “technologies of convenience,” which Sacasas connects to Borgmann’s “focal practices” and the “device paradigm.” There’s quite a bit of good stuff in there; I encourage you to read the whole thing, but I’d like to focus on that duality and in turn connect it to danah boyd’s piece I shared two weeks ago.
boyd wrote about how AI might deskill workers by removing some important learning stages and leaving us with only monitoring duties. To really simplify all of the above: for us to truly master and benefit from our technologies, they need to be demanding and to push our skills; when everything becomes easy, effortless, frictionless, we lose something. Which makes me think that if Superintelligence is to ruin humanity, it will likely not be through some nefarious intent, but simply by atrophying our brains. As Wu mentions, hello humans of WALL·E.
Wu goes on to draw a distinction between demanding technologies and technologies of convenience. Demanding technologies are characterized by the following: “technology that takes time to master, whose usage is highly occupying, and whose operation includes some real risk of failure.” Convenience technologies, on the other hand, “require little concentrated effort and yield predictable results.” […]
“The problem is that, as every individual task becomes easier, we demand much more of both ourselves and others. Instead of fewer difficult tasks (writing several long letters) we are left with a larger volume of small tasks (writing hundreds of e-mails). We have become plagued by a tyranny of tiny tasks, individually simple but collectively oppressive.” […]
There is a point at which the gains made by technology stop yielding meaningful satisfaction. […]
If we seek to remove all trouble or risk from our lives; if we always opt for convenience, efficiency, and ease—if, in other words, we aim indiscriminately at the frictionless life—then we simultaneously rob ourselves of the real satisfactions and pleasures that enhance and enrich our lives.
What A.I. risks we should really be worried about
Great interview with Meredith Whittaker, president of Signal but, more importantly, co-founder of the AI Now Institute at NYU. They discuss the “existential threat” of AI as Bogeymanned by Geoffrey Hinton, Melon Usk, and others as well as where the data comes from, the issue with only very few players dominating this technology, inequality, surveillance, and more. Well explained and important but largely stuff we’ve covered here before.
The unique part that makes this worth a read is how Whittaker argues that the existential threat most talked about is the one feared by the most privileged among us. For “low-wage workers, people who are historically marginalized, Black people, women, disabled people, people in countries that are on the cusp of climate catastrophe,” AI is already an existential risk; their livelihoods are already threatened. Whether it’s by being even more surveilled, discriminated against, criminalised, or ‘simply’ losing a job, the threat is present. It’s another example of Gibson’s over-quoted “the future is already here, it’s just not evenly distributed,” or “every utopia is someone else’s dystopia.” Forget the far future: AI’s existential threat is already here, it’s unevenly distributed, and it’s many people’s dystopia.
I’m not sure if I’ve already mentioned it, but it’s also uncanny how many of the proponents of ‘AI will kill us all in the future’ are white guys, and how intersectional the ‘clear and present actual danger’ group is, especially younger women of colour. If you pay attention, it’s really striking. And it one hundred percent parallels and supports Whittaker’s argument above.
Part of the narrative of inevitability has been built through a sleight of hand that for many years has conflated the products that are being created by these corporations—email, blogging, search—with scientific progress. […]
My concern is that if we wait for an existential threat that also includes the most privileged person in the entire world, we are implicitly saying—maybe not out loud, but the structure of that argument is—that the threats to people who are minoritized and harmed now don’t matter until they matter for that most privileged person in the world. […]
We know where they live, we know where their data centers are, and it is eminently possible to put these technologies in check if there’s a will. So it’s not out of control, it is not out of our hands.
More → Along the lines of the first quote, see Chiang’s piece from last week.
Google I/O and the coming AI battles → Ben Thompson reviews the event, considers it in the context of Google’s competitors, and then goes on a kind of weird but intriguing parallel with the printing press and its impact on religion and politics, all the way to the Westphalian system. He quotes himself a lot and history buffs might argue some of his points, but it leads him to wonder about the quote below, which I think is worth paying attention to. Big centralised players are easier to use as chokepoints; will that favour open models? (Which I’d favour for other reasons.)
In this view these proposed E.U. regulations are simply the first salvo in what may be the defining war of the digital era: will centralized — and thus controllable — entities win, or will there be a flowering on the fringe of open models that truly explore the potential of AI, for better or for worse.
Cory Doctorow on the Future Now podcast on Chokepoint Capitalism → Speaking of chokepoints, good interview with Doctorow, which I’m sharing here for the part I linked to where he explains that creations by generative SALAMI systems (AI) have been ruled as not eligible for copyright (with narrow exceptions), and that’s a good thing. Artists shouldn’t ask for the right to protect their material from training data, because then the big firms controlling the markets (music, movies, books, etc.) would immediately force them into signing away that right and would then use those models to automate out the artists. If SALAMI results remain non-copyrightable, then big companies like Disney won’t be able to automate their whole production chain; they’ll want to keep employing humans so that they can keep the rights. To ponder.
What happens when the computers disappear?
Christopher Butler has been on a post-writing binge for a little while, so I have no other choice but to go on a sharing-his-posts binge of my own. On wearables, computers inside the body, when we become cyborgs or how long we’ve been cyborgs, Haraway, post-humanity, modifying our bodies for climate change, fragility, and more.
We may have already irreparably changed the ecosystem that was once natural to us. Our ability to continue to thrive here might require intervention. The question again is, do we change the environment or do we change us? […]
Civilization has been built upon self-sufficient technological material. What happens when more and more of its future is determined by things that are exponentially more complex and fragile? […]
Thinking > Making > Living is a perpetual feedback loop. There is no technology without the thought that precedes it, but living with technology changes how we think.
What’s the point of reading writing by humans? → “I’ve come to realize that I function like a more curated but less efficient version of GPT. My sentences are not generated by A.I., but they are largely the synthesis of my favorite authors.”
- 😍 🗓 📧 🇩🇰 I looove everything about this—I mean, even the partners are cool. Including references is a nice touch. Naive Yearly. “Naive Yearly gathers people who expand what the web is and can be. It is an extension of Naive Weekly, made in collaboration with The National Film School of Denmark and supported by Bikubenfonden. When the sun sets, we party at SPACE10 and when the leaves turn red and yellow, the talks are published in partnership with Are.na. The gathering returns yearly for as long as relevant.”
- 😍 📸 🎥 🇨🇦 ⚜️ Out-Ingelsing Bjarke Ingels fifty-plus years earlier. Exploring Hillside: a new vision of Habitat 67. “Today, Safdie’s original vision can be explored in all its (virtual) glory with Unreal Engine, providing the next generation of architects, engineers, and urban planners an interactive reference point for creating a better tomorrow.”
- 🤩 🤖 🇸🇴 🇮🇳 🇲🇽 👑 🇬🇧 Generative AI allows TikTok creators to reimagine history. “A viral trend imagines alternate timelines in which Western imperial nations never came to power.” Also on the GenAI beat and having a bit of a laugh at the expense of the colonisers, these generated pictures of the Coronation after party are fantastic.
- 🤩 🏢 🤖 Fantastically Detailed Retro Futuristic Illustrations. “We’ve always loved the retro-futuristic skylines that included flying cars, bubbly-skyscrapers, and optimistic visions of the future. … We’ve explored our own generated illustrations, with the help of artificial intelligence, creating complex and highly detailed visions of an alternate reality.”
- 😑 🌳 This emoji is ‘expressionless face’ but let’s use it for my ‘not surprised, yet pissed face.’ Saving the forests won’t be enough to stop climate change — we need substantial emission cuts. “Many companies, from airlines to Big Tech, are betting on forest carbon capture to reach their net-zero pledges, but a new study shows this approach is flawed at its core.”
- 🤬 🛢 🇸🇦 🇷🇺 The top two are Saudi Aramco from Saudi Arabia and Gazprom from Russia so good luck getting any of that money. Fossil fuel firms owe climate reparations of $209bn a year, says study. “BP, Shell, ExxonMobil, Total, Saudi Arabia’s state oil company and Chevron are among the largest 21 polluters responsible for $5.4tn (£4.3tn) in drought, wildfires, sea level rise, and melting glaciers among other climate catastrophes expected between 2025 and 2050, according to groundbreaking analysis published in the journal One Earth.”
- 🎥 🤯 🚢 Titanic: First ever full-sized scans reveal wreck as never seen before. “The first full-sized digital scan of the Titanic, which lies 3,800m (12,500ft) down in the Atlantic, has been created using deep-sea mapping. It provides a unique 3D view of the entire ship, enabling it to be seen as if the water has been drained away.”
Join thousands of generalists and broad thinkers.
Each issue of the weekly features a selection of articles with thoughtful commentary on technology, society, culture, and potential futures.