Besides the risks of harm by omission and of focusing on the wrong things, which I agree with others here are a legitimate subject of debate in cause prioritization, there are risks of contributing to active harm. That is a slightly different concern (not fundamentally different for a consequentialist, but one that might carry greater reputational costs for EA). I think this passage is illustrative:
For example, consider the following scenario from Olle Häggström (2016); quoting him at length:
“Recall … Bostrom’s conclusion about how reducing the probability of existential catastrophe by even a minuscule amount can be more important than saving the lives of a million people. While it is hard to find any flaw in his reasoning leading up to the conclusion [note: the present author objects], and while if the discussion remains sufficiently abstract I am inclined to accept it as correct, I feel extremely uneasy about the prospect that it might become recognized among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying “If you want to make an omelet, you must be willing to break a few eggs,” which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.”
Häggström offers several reasons why this scenario might not occur. For example, he suggests that “the annihilation of Germany would be bad for international political stability and increase existential risk from global nuclear war by more than one in a million.” But he adds that we should wonder “whether we can trust that our world leaders understand [such] points.” Ultimately, Häggström abandons total utilitarianism and embraces an absolutist deontological constraint according to which “there are things that you simply cannot do, no matter how much future value is at stake!” But not everyone would follow this lead, especially when assessing the situation from the point of view of the universe; one might claim that, paraphrasing Bostrom, as tragic as this event would be to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—it wouldn’t significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species, except to ensure that we continue to exist (thereby making it possible to colonize the universe, simulate vast numbers of people on exoplanetary computers, and so on).
I think you don’t need Bostromian stakes or utilitarianism for these types of scenarios, though. Consider torture, collateral civilian casualties in war, or the bombings of Hiroshima and Nagasaki. In many such cases you could argue that more civilians will be saved overall, so the trade seems more comparable: actual lives for actual lives, not actual lives for extra lives (extra in number, not in identity, on a wide person-affecting view). But it seems act consequentialism is susceptible to making similar trades quite generally.
I think one partial solution is simply not to promote act consequentialism publicly without prefacing it with important caveats. Another is to correct naive act consequentialist analyses in high-stakes scenarios as they come up (as Phil is doing here, but also in response to individual comments).