
Taking the long view

How bad would human extinction be? To my mind, it all depends on how it happens. It would be strange to think that we should prevent extinction at all costs, in part because extinction is inevitable.

There’s a useful parallel here with one’s own extinction, one’s future death. One’s death is inevitable, but it need not be something to dread or fear. Yes, death becomes troubling when it is unjust, avoidable, unnecessary, or untimely. But in a life, death is often not the worst thing that could happen. Life can be more horrible than death.

These are perhaps difficult things to come to terms with, but I think they show that striving to die well is more important than striving not to die. And much the same holds for human extinction.

In the last couple of weeks, several excellent pieces have been published on an ethical programme that perversely twists utilitarian normative theory to argue that human extinction is the worst thing that could happen. According to this programme, our moral lives should revolve around eliminating existential risk. The human suffering we currently see around us is largely irrelevant, the thought goes, as it is dwarfed by the colossal threat of human extinction. Authors critical of this programme have already queued up. I particularly enjoyed reading Eric Schliesser’s critical reflections, as well as Nafeez Ahmed’s work uncovering the movement’s connections to eugenics.

Here I want to highlight a rather superficial aspect of this perverse programme. Superficial, but in my view nonetheless quite a crucial one: the programme’s name. In What We Owe to the Future, the book that is currently a prime vehicle of the movement, Will MacAskill writes in the introduction:

This book is about longtermism: the idea that positively influencing the longterm future is a key moral priority of our time. Longtermism is about taking seriously just how big the future could be and how high the stakes are in shaping it. If humanity survives to even a fraction of its potential life span, then, strange as it may seem, we are the ancients: we live at the very beginning of history, in the most distant past. What we do now will affect untold numbers of future people. We need to act wisely.

The term ‘longtermism’ has already been taken up even by critics to talk about the ethical programme MacAskill and his conspirators defend. But notice how MacAskill in his introductory act of baptising slides from one idea to another. The first idea is that positively influencing the longterm future is a key moral priority. I agree. The second idea, however, is much more controversial and I disagree with it. This is that we should take seriously just how big the future could be.

What does this mean? MacAskill means that because we can influence the size of future populations, we can influence the amount of future happiness. The most efficient way of maximising happiness (the utilitarian maxim) is to make sure that we create as many future people as possible, or so MacAskill seems to think. To this end, the ethical programme MacAskill spearheads proposes that we try to colonise space and find ways of uploading our minds to computers, so as to accommodate as many human lives as possible. That’s the positive ethical programme. But the single biggest threat to it is human extinction. If humans go extinct any time soon, trillions of potentially happy people will never see the light of day. In fact, every other form of current injustice or suffering will simply be outweighed by the many trillions of virtual minds the future could see. Hence, on this view, striving for humans not to go extinct is our most important ethical imperative.

MacAskill wants us to think of both ideas (that we should show heightened concern for the future, and that we should be concerned with promoting trillions of possible future lives) as the same, but they are not.

To be clear, I think the positive programme associated with all this is megalomanic and misguided. For one thing, it’s impossible to upload your mind to a computer (that people think it possible owes much to the distorted lens through which many remember Descartes’s legacy). And even if you think that we can scale up humanity in a non-virtual way, the problem of ethics is simply not one of scalability. It is perhaps no surprise that the ethical programme MacAskill outlines is popular in Silicon Valley, where the default problem is always one of scalability.

But I want to set these points of criticism aside. I think that even before making such observations about the programme’s assumptions, and before we start a discussion about the programme’s merits (or lack thereof), we should object to the name MacAskill and others have appropriated for this view: longtermism.

That name implies that this is the programme that puts the future first, the programme that is willing to take the long view of things.

Names are superficial, but they matter. Even though they’re not in themselves truth-evaluable theses, names often bring covert presuppositions to a discussion. It is often wise to look carefully at the dialectical work a name does before we accept its use, and to block that use if the name illegitimately undermines what we want to say or do, e.g. by undermining our own views even before discussion has begun. And blocking is exactly what I propose we do in this case. We should refuse the name MacAskill and his sympathisers have appropriated. It’s ill-suited and misleading.

MacAskill’s act of baptism I quoted above implies that anyone who’s not on board with the programme he advances, with the idea that the problem of ethics is one of scalability, is someone who only really has eyes for the short term. His appropriation of ‘longtermism’ presupposes that anyone who does not think that human extinction is our most important ethical problem takes a short-sighted, myopic view of things. This is misleading and wrong.

The beamed ceiling of New College, Oxford

I’m reminded of a story about the ceiling beams of the dining hall at New College, Oxford. Contrary to what the name may lead you to think, New College is among the oldest Oxford colleges, founded in the 14th century. The vast dining hall was built around that time. In the 19th century, the roof of the dining hall was reaching the end of its life, and the unusually long beams had to be replaced. The carpenters worried it would be impossible to find trees tall enough to supply the wood required for the renovation. But then someone discovered a record of how, back in the 14th century, when the hall was built, the College’s governing body had planted trees for exactly this purpose: to replace the ceiling beams once they came to the end of their life.

These medieval planners took the long view. They knew that planting those trees was the right thing to do. They didn’t think only of their own immediate circumstances, with a brand-new hall; they took into account events that would only happen centuries after their deaths, and the way their actions could make a difference to future lives. Yet the actions of those 14th-century planners had nothing to do with procreation, scalability, or extinction. They were concerned with preserving a form of life they valued, and preserving it well into the future. If you care to stick an -ism on them, they were longtermists. We can and should learn from them. But what they did has little to do with the ethical programme outlined by MacAskill.

One of the biggest problems with present-day society is that it doesn’t take the long view. We don’t plant the trees needed for the future. Capitalism is about short-term profiteering, and this has infected society at every level. Just think of how governments responded to the pandemic, and of how climate policy continues to be inadequate. It shows a short-sightedness that is morally contemptible.

But we should be very careful to distinguish rejecting this endemic short-sightedness from endorsing the megalomanic ethics MacAskill and his sympathisers want us to adopt instead. Longtermism properly understood, taking the long view, may well mean navigating an entirely different course.
