This week → Ecological crises and equitable futures ⊗ AI is an ideology, not a technology ⊗ Why don’t we just ban targeted advertising? ⊗ That feeling of being there
A year ago → Could We Blow Up the Internet?
Excellent piece offering “a diagnosis of dominant narratives of possible ecological futures and what they mean for us.” Namely: extinction (as in XR), ecological apocalypse (Malthus), technological solutionism (Musk), and degrowth (Raworth). The author is definitely on the degrowth side, and some points can be made against a few of his criticisms of the other futures, but overall it’s a great overview, a valid critique of some of these visions, and a strong case for the last. Also of interest because it’s from a “sectoral left think tank in Aotearoa New Zealand”: although it’s valid for everyone, it’s written from and for Kiwis, which is a less common angle.
TL;DR: the degrowth vision is the only truly holistic vision aiming for the wellbeing of people around the globe, not one subset or the other.
This approach, which envisages ecological crises as a business opportunity rather than as a fundamental crisis of capitalist socio-economic organisation is largely wishful thinking. Even when technological solutionism is posited on the anti-capitalist left, it tends towards utopian thinking which fails to meaningfully engage with the material reality of ecological crises. […]
If the discourse of the Anthropocene problematically homogenises humans in order to distribute blame equally for ecological crises which are overwhelmingly the result of activities by certain groups of economically privileged humans, discourses surrounding overpopulation predominantly criticise those who contribute the least to these crises. […]
[T]he ideology of technological solutionism remains a prominent fantasy that purportedly fixes Anthropocenic ecological crises. Such claims rely upon the aberrant associations that digital technologies are green, smart or immaterial. […]
While capitalism has historically relied upon the enclosure of commons and the artificial production of scarcity, the degrowth model seeks to promote commons and public ownership in order to manage resources in an ecologically responsible manner whilst also promoting more equal societies. […]
Degrowth, however, should not be understood as a contraction of the existing economic system, but as a transition to an altogether different, post-capitalist economy where ‘wealth’ is understood differently to current measures of GDP or GDP per capita.
Glen Weyl and Jaron Lanier on other ways to envision AI, instead of the current ideology. They look at humanist and pluralist visions, keeping humans involved in AI’s “intelligence” and valuing that contribution—including the providers of data, not just the engineers. To their mind (and I tend to agree), the current AI ideology is that technologies built by an elite aim to replace humans instead of complementing them. I’d also make a parallel with the use of “magic” as a form of obfuscation, hiding work done for free or cheaply. See also fauxtomation, haunted machines, and “jobs below the API.”
A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There’s always a second way to conceive of any situation in which AI is purported. This matters, because the AI way of thinking can distract from the responsibility of humans. […]
The very idea of AI might create a diversion that makes it easier for a small group of technologists and investors to claim all rewards from a widely distributed effort. Computation is an essential technology, but the AI way of thinking about it can be murky and dysfunctional. […]
“AI” is best understood as a political and social ideology rather than as a basket of algorithms. The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity. […]
Driven neither by pseudo-capitalism based on barter nor by state planning, Taiwan’s citizens have built a culture of agency over their technologies through civic participation and collective organization, something we are starting to see emerge in Europe and the US through movements like data cooperatives. […]
The active engagement of a wide range of citizens in creating technologies and data systems, through a variety of collective organizations offers an attractive alternative worldview.
Overview of the multiple reasons why banning targeted ads would be good: simpler than dealing with privacy issues piecemeal, and excellent for politics and democracy. An especially alluring option, since the only thing really “lost” would be more precise ads. (The author does look at what might be lost, or not, revenue-wise for publishers.)
Instead of trying to clean up all these messes one by one, the logic goes, why not just remove the underlying financial incentive? Targeting ads based on individual user data didn’t even really exist until the past decade. (Indeed, Google still makes many billions of dollars from ads tied to search terms, which aren’t user-specific.) What if companies simply weren’t allowed to do it anymore? […]
“I honestly believe we are not going to solve any of the problems that we’re worried about, like election interference and disinformation, unless we ban targeted advertising.” […]
[Zephyr Teachout] co-authored a paper arguing that the dominant internet platforms should be treated as public utilities and prohibited from using behavioral ads. If telephone utilities aren’t allowed to eavesdrop on our conversations and sell the details to marketers, then Amazon or YouTube shouldn’t be able to do the same with our browsing history. […]
Even a subscription-based social network would want to engage its users, he said, and what engages users is sensationalism and filter bubbles. “I do not think it is enough to address the damage of microtargeting if you don’t also deal with algorithmic amplification,” McNamee told me.
I’d like to draw your attention to the last section of this recent newsletter issue by Dan Hon, where he looks at online communities as they relate to recreating conference experiences online. He’s pointing at a few interesting things, and the current situation means this should be a fast-evolving field to pay attention to. Related: just today a group I was in suffered a ‘Zoombombing’ and, you know, that’s why we can’t have nice things.
“Online community” software stagnated ever since the social software boom, where community pivoted from small groups to scaleable management of large groups and also all the money people came in. […]
I keep going on about Fortnite being one of the best predecessors to some sort of shared virtual world at scale and I will continue to hold that belief until something better comes along. This is because Fortnite has both purpose and presence – that it’s a place but also a reason for you to be in that place. […]
Now, I haven’t been to any of the Fortnite gigs, so I should go and actually do that. But this is what I imagine: you can go there with both your friends and other people. You can talk about it with your friends at the same time. You can also see other people and your friends. You can move around. You can emote, to a degree. This sounds… pretty good?
How Silicon Valley doesn’t believe in history, and isn’t the first world-changing industry to dismiss it.
“On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” Wiping the slate clean with the digital era paved the way for the kind of ignorance techno-utopian narratives traffic in. […]
“I think history leads you to be a bullshit detector,” Vinsel said. He supposes this may be the fundamental incompatibility between tech companies, which disseminate an awful lot of bullshit, and their disdain for an honest reading of history. Perhaps, he thought, they might see a little too much of it in themselves. After all, Vinsel added, “there’s not a lot of innovation in bullshit.”
- Society Centered Design. “We must advocate for civic value, equity, the common good, public health, and the planet. We need a new framework for design and data that is purpose-built for the 21st century. We want to move beyond human-centered design to society-centered design. We must design for the collective. We must design for society.”
- I’m reducing my virus news exposure right now, so I haven’t read it yet, but this Ed Yong piece has been highly recommended in a few places: How Will the Coronavirus End?
- For the full life experience, put down all devices and walk. “The art of walking is antithetical to ‘screening’ the world we live in, and there is no pre-programmed set of rules or calculations involved. Walking, simply for the sake of a walk, can be a brief respite in our otherwise frenetic lives, allowing us to detach so we might see life for ourselves again, not unlike a child does.”
- Artificial islands older than Stonehenge stump scientists. “A study of crannogs in Scotland’s Outer Hebrides reveals some were built more than 3,000 years earlier than previously thought. But what purpose did they serve?”
- 25 Photos of Madeira’s Dreamy Fanal Forest by Albert Dros. “[O]ne of Madeira’s most unique treasures is an ancient forest where every step takes you inside a scene ripped from a fairy tale. Known for its enchanting morning fog, the Fanal forest is part of an ancient laurel forest.”
- What It Looks Like From Space When Everything Stops. “The disruptions are playing out on every possible scale, from individual lives to businesses, nations, and even phenomena visible from space. Images from satellite company Planet Labs Inc.’s SkySat imaging orbiters show how abrupt and total the cessation of human activity has been.”
- Other Seas/Other Suns: An Interview with Matt Griffin. I haven’t read the interview, linking here for the superb illustration work.
- Warner Bros. Will Now Use AI to Help Decide Which Movies to Make. “The new AI technology will be used by Warner Bros. in the greenlight stage, where executives attempt to look at a bunch of different data and decide if a film will be profitable for them or not. With the Cinelytic AI, the goal is to have more precise data so that Warner Bros. can better engage with audiences. The new AI system is supposed to help create an estimate on a film’s expected earnings.”
Header image: Yes, I did go the way of the “social isolation” picture, sorry.