A really good interview with AI Now Institute founders Kate Crawford and Meredith Whittaker, covering many of the worries and problems around AI and ethics. Notice how they always circle back to data sources and whether we are asking the right questions. Nothing massively new, but it recaps and conveys the full scope of the problem, as the large number of quotes below attests (!!). You might also want to watch the Screening Surveillance videos further down right after reading this; they're a perfect match.
[I]n the end, you’re talking about cultures of data production and if that data is historical, then you are importing the historical biases of the past into the tools of the future. […]
“Okay, how do we think about data construction practices? How do we think about how we represent the world and the politics of AI?” Because these systems are political, they’re not neutral. They’re not objective. They are actually made by people in rooms. And that’s why it matters who’s in the room, who’s making the system, and what types of problems they’re trying to solve. […]
[H]ow are you measuring benefit? And that’s one of the key areas I think we need to look at more closely, right? So in increasing crop yield, that might be a huge benefit, but is that coming at the expense of soil health? Is that coming at the expense of broader ecological concerns? Is that displacing communities that used to live on that land? I’m sort of making up these examples as questions you’d want to ask before you sort of claim blanket benefits from these technologies. […]
These issues are serious and they’re being taken seriously. But what we don’t see is real accountability. What we don’t see are mechanisms of oversight that actually bring the people who are most at risk of harm into the room to help shape these decisions. […]
They’re having different conversations about privacy that recognize it’s not just about individual privacy, it’s about our collective privacy. It’s the fact that, if you make a decision in a social media network, that can affect how data from all of your contacts is being extracted as well. I think there’s an increasing level of literacy, and that’s something that’s super important. […]
The US has many similar systems that are either in place or about to be in place in the next couple of years. I’m sure you read the news that, for example, in New York, insurers have been given full permission to look at your social media to decide how to modulate your insurance rates. That sounds very similar to the sorts of things that we’re concerned about in China. […]
[🔥] I’d say this field has worshiped at the altar of the technical for the better part of 60 years, and at the expense of understanding the social and the ethical. We’re seeing the fruits of that prioritization. […]
Meredith Whittaker: I think he’s [Elon Musk] wrong across the board. I think the premise is faulty, but it is a great distraction from the very real harms of faulty, broken, imperfect, profitable systems that are being mundanely and obscurely threaded through our social and economic systems.
Kate Crawford: We’ve called this the apex predator problem: if you’re already an apex predator and you have all the money and all the power in the world, what’s the next thing to worry about? “Oh, I know! Super-intelligent machines, that’s the next threat to me.” But if you’re not an apex predator, if you’re one of us, we’ve got real problems with the systems that are already deployed, so maybe let’s focus on that.