


AI Risk and Capitalism

How suddenly everyone seems worried about ‘Artificial Intelligence’! It’s as if massive threats to human flourishing only become acceptable topics of concern once they’re cushioned in a soothing cloud of sci-fi speculation. Well, let me tell you, there’s a much bigger problem, and the current path of AI largely owes its existence to it: capitalism is the overarching threat we should be discussing.

So, what happened? A large number of people working in ‘Artificial Intelligence’ (AI) have begun to see that AI might actually turn out to be very dangerous. If left unchecked, it could spread uncontrollably to wipe out humanity, or at least make our lives extremely miserable. AI researcher Geoffrey Hinton resigned from Google in part because of this realisation, and he thinks that curtailing the development and use of dangerous AI may require international agreements akin to the bans on chemical weapons. A few days ago a long list of people, including some philosopher colleagues of mine, signed an open letter in which they argue that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

It all sounds terribly urgent all of a sudden, doesn’t it?

Whether these fears about AI are overblown is a moot point. It’s not even clear how ‘I’ AI is. Is it really intelligence we’re dealing with, or just a clever trick these programs play on our minds?

As I’ve seen it described, the GPT-style AI people are currently raving about is a sophisticated form of autocomplete: it predicts what answers people expect to be given. Yes, GPT and its friends seem rather good at passing the Turing test, meaning that their responses to prompts can be indistinguishable from a human’s. But we should keep in mind that the development of AI over the last decades seems to have fallen foul of Goodhart’s law, which says that “when a measure becomes a target, it ceases to be a good measure”. The Turing test has long been used as a measure of machine intelligence, and arguably AI research since the 1980s has done exactly what Goodhart warned against: it has turned that measure into a target, racing to build machines that pass the test. If that is right, then the Turing test may well be junk as a measure of these systems’ intelligence, and we really can’t tell from GPT’s responses whether it is intelligent or not.
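
To make the ‘autocomplete’ point concrete, here is a toy next-word predictor in Python. It is of course nothing like GPT’s actual architecture, just a minimal sketch of what ‘predicting the next word’ means:

```python
from collections import Counter, defaultdict

# Toy autocomplete: record which word follows which in a tiny corpus,
# then predict the most frequent successor. GPT-style models are vastly
# more sophisticated, but the objective is the same in spirit:
# predict the next token.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word`, or None if unseen.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' twice in the corpus)
```

Nothing in this sketch understands anything; it merely continues text in the statistically most likely way, which is precisely why the question of whether such systems are ‘intelligent’ is so slippery.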

But I don’t actually want to get into this discussion about intelligence. I want to talk about the sudden concern about societal risk. Even if the concerns about AI are justified, it strikes me as decidedly odd—and somewhat suspicious—that people suddenly care about it.

Why should the risk of extinction from AI suddenly feel urgent, and be given priority alongside other societal-scale risks such as pandemics and nuclear war? After all, we don’t really care much about those other societal-scale risks either, do we?

For one thing, we’re in the middle of a pandemic that has killed tens of millions of people and disabled at least ten times as many. We’ve just officially ceased to give that risk any priority. The only broadly publicised open letter on the ‘risk of the pandemic’ I can remember is the Great Barrington Declaration, which argued precisely the opposite of mitigating societal risk: that pandemic mitigation harms society, and that we should simply allow those at risk from pandemic viruses to die. And that ended up being the prevailing pandemic policy! So why do people suddenly care about the prospect of loads of people being killed?

I’m being cynical, of course. But I do think it’s striking that public consciousness is suddenly gripped by concern about a mass die-off only once a group of millionaires starts spearheading the worry.

My view is this: if you are truly concerned about existential risks, and genuinely want to mitigate them, then a focus on AI seems an odd priority right now. Capitalism, the supreme existential risk that lies behind all of the aforementioned risks, is not even a speculative risk or a matter of probability. There’s not a 10% or a 40% chance that capitalism will destroy us if left unchecked. No: the capitalist economic system, which aims at maximising profit through economic expansion, is destroying the conditions of human life right now, at a global scale. This isn’t a hypothetical. It’s actually happening.

You can quibble about how long it would take for humanity to be wiped out by our own economic success, of course, but it’s a matter of logic that an uncurtailed capitalist economy will destroy humanity: perpetual depletion of a finite world necessarily bottoms out. And if you look at what’s happening to the climate and to agricultural land around the world, and if you consider that the development of AI itself bears a capitalist, for-profit stamp, it’s clear that the risk capitalism poses right now is real and actual, and patently more urgent than the current risk of any future AI system.
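
To see that ‘necessarily bottoms out’ is simple arithmetic rather than speculation, here is a back-of-the-envelope sketch in Python. The numbers are made up for illustration; the point is only that consumption growing at any fixed rate exhausts any finite stock in finite time:

```python
# Illustrative only: a stock worth 100 years of today's consumption,
# drawn down by consumption that grows 3% per year.
stock = 100.0        # total stock, in units of current annual consumption
consumption = 1.0    # this year's draw on the stock
growth = 0.03        # 3% yearly growth in consumption

years = 0
while stock > 0:
    stock -= consumption
    consumption *= 1 + growth
    years += 1

print(years)  # -> 47: a '100-year' stock is gone in under half a century
```

Swap in whatever stock size and growth rate you like; as long as the growth rate is positive and the stock is finite, the loop terminates.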

I’m not saying we shouldn’t curtail the development and use of AI along the lines Hinton suggests. But it would be absolutely useless to do so if we don’t also curtail the development and existence of capitalist economies. So it’s time for our heartfelt expressions of concern to mention the unmentionable and prioritise the risk posed by capitalism.

Activists, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from capitalist economies. Even so, it can be difficult to voice concerns about some of capitalism’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of capitalism’s most severe risks seriously.

Mitigating the risk of extinction from capitalism should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Shout it from the rooftops.


All material on Indoxicate is licensed under a CC BY 4.0 licence, unless specified otherwise.