I tried to make this comment before, but for some reason it isn’t visible, so I’m reposting it.
I think this is an interesting paper. I gave it an upvote.
One comment: It is misleading to say that on total utilitarianism + longtermism “the axiological difference between S1 and S2 is negligible”. It may be negligible compared to the difference between either and utopia, but that doesn’t mean it’s negligible in absolute terms. Saying that the disvalue of a single terrible thing happening to one person is “negligible” compared to the total disvalue in the world over the course of ten years doesn’t necessarily mean one is callous about the former.
Much though I dislike important conversations happening on Facebook rather than some more public forum, it’s probably worth anyone considering engaging here first reading the pre-existing Facebook discussion here and here. At the very least we can avoid re-treading old ground.
I think the concerns about utopianism are well-placed and merit more discussion in effective altruism. I’m sad to see the post getting downvoted.
I downvoted it based on things like calling John Halstead and Nick Beckstead white supremacists (based on extremely shaky argumentation) and apparently taking it as obvious that rejecting person-affecting views is morally monstrous.
I might make longer, more substantive comments later, but there are reasons to downvote this other than wanting to squash discussion of fanaticism.
It may be noted that in the thing I wrote on climate change I don’t actually defend long-termism or even avow belief in it.
For those who find it confusing that I, at best a mid-table figure in EA, get dragged into this stuff, the reason is that I once publicly criticised a post on Pinker that Phil wrote on Facebook (my critique was about three sentences). Phil has since then borne a baffling and persistent grudge against me, including persistently sending me messages on Facebook, name-checking me while making some rape allegations against some famous person I have never heard of, and then calling me a white supremacist. Hopefully, this gives some insight into Phil’s psychology and what is actually driving posts such as the one linked to here.
John: Do I have your permission to release screenshots of our exchange? You write: “… including persistently sending me messages on Facebook.” I believe that this is very misleading.
please do
Thanks for pointing that out!
For those who might worry that you’re being hyperbolic, I’d say that the linked paper doesn’t say that they are white supremacists. But it does claim that a major claim from Nick Beckstead’s thesis is white supremacist. Here is the relevant quote, from pages 27-28:
“As he [Beckstead] makes the point,
>> saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
This is overtly white-supremacist.”
The document elsewhere clarifies that it is using the term white supremacism to refer to systems that reinforce white power, not only to explicit, conscious racism. But I agree that this is far enough from how most people use the terminology that it doesn’t seem like a very helpful contribution to the discussion.
Thanks, I agree with this clarification.
I actually find the argument that those arguing against prioritising climate change are aiding white supremacy[1] more alarming than the attack on Beckstead, even though the accusations there are more oblique.
While Beckstead’s argumentation here seems basically right to me, it is clearly somewhat incendiary in its implications and likely to make many people uncomfortable – it is a large bullet to bite – even if calling it “overtly white-supremacist” is, I think, bad argumentation that risks substantially degrading the discourse[2].
Conversely, claiming that anyone who doesn’t explicitly prioritise a particular cause area is aiding white supremacy seems like extremely pernicious argumentation to me – an attempt to actively suppress critical prioritisation between cause areas and attack those trying to work out how to make difficult-but-necessary trade-offs. I think this style of argumentation makes good-faith disagreement over difficult prioritisation questions much harder, and contributes exceedingly little in return.
[1] “Hence, dismissing climate change because it does not constitute an obstacle for creating Utopia reinforces unjust racial dynamics, and thus supports white supremacy.” (p. 27)
The document also claims (in footnote 13) that “the prevalence of such tendencies” (by which I assume it means “overtly white-supremacist” tendencies, since the footnote is appended directly to that accusation) in EA longtermism “may be somewhat unsurprising” given EA’s racial make-up. I would find it quite surprising if many EAs were secretly harbouring white-supremacist leanings, and would require much stronger (or indeed any) evidence that this were the case before casting such aspersions.
Yeah, agreed that using the white supremacist label needlessly poisons the discussion in both cases.
For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).
Maybe this is a bit off-topic, but I think it’s worth illustrating that there’s no sense in which the longtermist discussion about saving lives necessarily pushes in a so-called “white supremacist” direction.
Does this take more immediate existential risks into account, and if so to what degree, and how people in the developing and developed worlds affect those risks?
No.
Yeah, I agree the facile use of “white supremacy” here is bad, and I do want to keep ad hominems out of EA discourse. Thanks for explaining this.
I guess I still think it makes important enough arguments that I’d like to see engagement, though I agree it would be better said in a more cautious and less accusatory way.
I’m sad to see this comment get downvoted.
Besides the risks of harm by omission and of focusing on the wrong things, which I agree with others here are legitimate subjects of debate in cause prioritization, there are risks of contributing to active harm, which is a slightly different concern (not fundamentally different for a consequentialist, though it might carry greater reputational costs for EA). I think this passage is illustrative:

For example, consider the following scenario from Olle Häggström (2016); quoting him at length:

“Recall … Bostrom’s conclusion about how reducing the probability of existential catastrophe by even a minuscule amount can be more important than saving the lives of a million people. While it is hard to find any flaw in his reasoning leading up to the conclusion [note: the present author objects], and while if the discussion remains sufficiently abstract I am inclined to accept it as correct, I feel extremely uneasy about the prospect that it might become recognized among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying “If you want to make an omelet, you must be willing to break a few eggs,” which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.”

Häggström offers several reasons why this scenario might not occur. For example, he suggests that “the annihilation of Germany would be bad for international political stability and increase existential risk from global nuclear war by more than one in a million.” But he adds that we should wonder “whether we can trust that our world leaders understand [such] points.” Ultimately, Häggström abandons total utilitarianism and embraces an absolutist deontological constraint according to which “there are things that you simply cannot do, no matter how much future value is at stake!” But not everyone would follow this lead, especially when assessing the situation from the point of view of the universe; one might claim that, paraphrasing Bostrom, as tragic as this event would be to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—it wouldn’t significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species, except to ensure that we continue to exist (thereby making it possible to colonize the universe, simulate vast numbers of people on exoplanetary computers, and so on).
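To spell out the arithmetic Häggström alludes to (“if he knows how to do the arithmetic”), using purely illustrative figures since neither he nor the paper fixes a number for the value of the far future: let V be the expected value of humanity’s future in life-equivalents, let p = 10^-6 be the lunatic’s chance of succeeding, and let N ≈ 8 × 10^7 be the approximate population of Germany. Eliminating the risk gains p × V in expected value at a cost of N lives, so the naive total-utilitarian calculation favors the assault whenever p × V > N, i.e. whenever V > N / p = 8 × 10^13 life-equivalents. Any “astronomical” estimate of the future’s value clears that threshold by many orders of magnitude, which is exactly what makes the conclusion so hard to stomach.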
I think you don’t need Bostromian stakes or utilitarianism for these types of scenarios, though. Consider torture, collateral civilian casualties in war, or the bombings of Hiroshima and Nagasaki. Maybe you could argue in many cases that more civilians will be saved, so the trade seems more comparable: actual lives for actual lives, rather than actual lives for extra lives (extra in number, not in identity, on a wide person-affecting view). But it seems act consequentialism is susceptible to making similar trades generally.
I think one partial solution is to just not promote act consequentialism publicly unless you preface it with important caveats. Another is to correct naive act consequentialist analyses in high-stakes scenarios as they come up (as Phil is doing here, but also in response to individual comments).
I think the paper title is clickbaity and misleading, given that you argue narrowly against Bostrom’s conception of existential risk rather than the broader idea of x-risk itself.
“technological development proceeds from the time of this writing (in 2020) for another decade. Cures for pathologies like Alzheimer’s, diabetes, and heart disease are discovered. New strategies for preventing large-scale outbreaks of infectious disease are developed, and life expectancy around the world increases to 95 years old. The human population stabilizes at around 8 billion people… But at the end of this decade, technological progress stalls permanently: the conditions realized at the end of the decade are the conditions that hold for the next 1 billion years, at which point Earth becomes uninhabitable due to the sun’s growing luminosity. Nonetheless, many trillions and trillions of humans will come to exist in these conditions, with more opportunities for self-actualization than ever before.” (pp. 13-14)
I agree that this is not an existential catastrophe, at least on timescales of less than a billion years, provided that humanity is not permanently prevented from leaving Earth. To me, an “existential catastrophe” is an event that causes humanity’s welfare or the quality of its moral values to permanently fall far below present-day levels, e.g. to pre-industrial levels. At most, I’d be disappointed if technology plateaued at a level above the present day’s technological progress.
However, I’d consider it an existential catastrophe if humanity permanently lost the ability to settle outer space, because that would make our eventual extinction inevitable.
I think the key message a lot of people will take away from this post is “Your entire philosophy and way of life is wrong—it doesn’t matter if everyone dies.”
What is the key message you actually want people to take away from this post?
If they read superficially, yes. Would you prefer he explicitly say in the abstract “I think it’s bad if everyone dies”?
ælijah: If you’re going to accuse other users of having read something superficially, please explain your views in more detail. What do you think the paper’s key message is, and what sections/excerpts make you believe this?
I’ll note that Khorton didn’t suggest that “it doesn’t matter if everyone dies” was what the post’s author actually meant to convey—instead, she expressed concern that it could be read in that way, and asked the author to clarify.
Also, speaking as a Forum moderator: the tone of your comment wasn’t really in keeping with the Forum’s rules. We discourage even mildly abrasive language if it doesn’t contain enough detail for people to be able to respond to your points.
I apologize. I meant my comment to say that the paper wouldn’t be misunderstood in that way by its readership as a whole if it were read carefully.
On further thought, I think it could be reasonably argued that the abstract actually should explicitly say “I think it’s bad if everyone dies”.
Thanks for clarifying. This topic has generally been contentious, so I want to be careful to keep the discussion focused on substantive engagement with Torres’ ideas or specific wording.
In light of these concerns, will you be changing your Twitter handle, ‘xriskology’, which I think you registered when you were a big supporter of the concept?
Or maybe you’ll edit your website xriskology.com, which among other things promotes your book ‘Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks’?
The author still cares about x-risks, just not in the Bostromian way. Here’s the first sentence from the abstract:
Weird that you made a throwaway just to leave a sarcastic and misguided comment.