Call a moral theory “demanding” to the extent that conforming to its requirements makes its adherents worse off. Many people have complained that EA-style consequentialism is too demanding. For example, it may require many rich Westerners to devote significant amounts of time and money to helping people in extreme poverty. Many EAs like to emphasize that this requirement is not as demanding as it may appear. One reason: sacrifices of time and money are compensated by tremendous feelings of “self-actualization and excitement” from having made the world a better place.
For example, here is Peter Singer in “The Most Good You Can Do”:
Self-esteem is an important component of happiness. … The most solid basis for self-esteem is to live an ethical life, that is, a life in which one contributes to the greatest possible extent to making the world a better place. … When Henry Spira, the pioneering campaigner for animals whom we met in chapter 5, knew he did not have long to live, he said to me, “When I go, I want to look back and say: ‘I made this world a better place for others.’ But it’s not a sense of duty, rather, this is what I want to do. I feel best when I’m doing it well.”
Spira gave up a lot, but in doing so he enriched his life. Plausibly, this enrichment came from his knowledge that his successful activism had (as Singer puts it) “spared untold millions of animals acute pain and prolonged suffering.”
Thus, many altruists gain something from knowing that they have taken actions which have (likely) done good: saved children from malaria, prevented the torture of pigs, helped to reform a vicious criminal justice system. This gain offsets the putative demandingness of an altruistic lifestyle.
But now consider another kind of EA-style action: working on projects to mitigate existential risk, or x-risk. At this point in time, working to lower the chance of global catastrophe involves spending your time and money on projects that, by the admission of those who prioritize x-risk, almost certainly will not achieve this aim. Of course, the potential value of the (unlikely) success is so enormous that x-risk projects have a huge expected value. As Nick Beckstead puts it, an x-risk-focused, expected value approach “asks us to be happy with having a very small probability of averting an existential catastrophe, on the grounds that the expected value of doing so is extremely enormous, even though there are more conventional ways of doing good which have a high probability of producing very good, but much less impressive, outcomes.”
Or, in Nick Bostrom’s stronger words, “Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy.” At the same time, as Bostrom acknowledges, “The problem of how to minimize existential risk has no known solution.” It’s a gamble.
I argue that sacrificing time and money in pursuit of these low-probability, high-reward x-risk projects involves a distinct kind of demandingness from what is usually considered. Not only do you have to sacrifice your own well-being: not buying a sports car, or a nice guitar, or whatever. You also have to give up the satisfaction of knowing that you have taken actions which you believe have likely done good. This is something that many people find very important. But the kind of satisfaction that Singer describes above is not available to someone engaged in this type of risky project. Such a person might live and die without ever knowing whether they have done any good, and in fact might have a 99.9% credence that they have not.
So this is another sense in which consequentialism may be very demanding indeed. Part of what makes us happy is the satisfaction of actually (or likely) helping people. Consequentialism can ask us to give up even this.
(Note that I do not take this to be reason to reject consequentialism, or to reject low-probability, high-reward projects. As it happens, I have never felt the force of arguments from demandingness; they seem to me to presuppose from the outset what the correct moral theory should be like. My own instinct is: if morality ends up being extremely demanding, including in the “almost certainly do no good” sense, so be it.)