“Technology is neither good nor bad; nor is it neutral.”
There are a number of important points in this piece from the Real Life Magazine newsletter. But the quote above needs to be kept in mind. The article is largely written as if the negative aspects and impacts of AI stem from nefarious purposes. They do, but mostly in the pursuit of money or through a blatant disregard for humans. To me, doing borderline evil sh/t because you don’t care is not exactly the same as trying to destroy the world.
If you read it while replacing ‘AI’ with ‘AI created by a corporation, using data harvested for its purposes, within neoliberalism,’ then it’s a much more useful piece, imho. Everything written in there is true, but it’s not the whole truth. It’s not AI ‘as it has to be,’ it’s AI as it’s currently leveraged. (See BLOOM further down, for a different example.)
Algorithmic systems don’t simply start with data and “the facts,” as though these are just lying around, unwarped by the intentions of whoever was in a position to gather them. “The grammar of prediction tends to prefer particular kinds of data and relations over others: a preference driven not purely on epistemic grounds, but by economic and institutional conditions that make some datasets and problems more available than others,” Hong notes. “Reality” doesn’t automatically translate into some “correct” data set. It always exceeds what can be measured, and what is measured becomes a kind of argument for a particular understanding of reality. […]
Prediction is as much a way of thinking and talking about how we make facts, and who declares those facts about whom, as it is a set of calculative techniques. The struggle over these concepts – and the moral attitudes, affective orientations, and other pictures of the world embedded into them – shape our perceptions of what kind of technological arrangement is ‘inevitable’, or what kinds of reform, abolition, and alternatives are considered ‘plausible’.