One of my most confusing experiences with EA in the last couple of months has been this poll https://www.facebook.com/groups/effective.altruists/permalink/3127490440640625/ where you and your colleague Magnus stated that one day of extreme suffering (drowning in lava) could not be outweighed by even an (almost) infinite number of days of experiencing extreme happiness (which was the answer with the most upvotes). Some stated in the comments that even a “1 in a googol probability of 1 minute in lava” could never be outweighed by an (almost) infinite number of days experiencing extreme happiness.
To be honest, these sound like extremely strange and unintuitive views to me, and they made me wonder whether EAs differ from the general population in ways I haven’t thought much about (e.g. being less happy in general). So I have several questions:
1. Do you know of any good articles etc. that make the case for such views?
2. Do you think such or similar views are necessary to prioritize s-risks?
3. Do you think most people would/should vote in such a way if they had enough time to consider the arguments?
4. It seems to me that people constantly trade happiness for suffering (taking drugs expecting a hangover, eating unhealthy food expecting health problems or just feeling too full, finishing that show on Netflix instead of going to sleep…). This makes me believe that most people might not accept a 1:1 compensation of suffering through happiness, but they are also far from demanding a 1:10^17 return, let alone holding that no amount of happiness could compensate for any kind of suffering.
Disclaimer: I haven’t spent much time researching s-risks, so if I got it all wrong (including the poll), just let me know.
Concerning how EA views on this compare to the views of the general population, I suspect they aren’t all that different. Two bits of weak evidence:
I.
Brian Tomasik did a small, admittedly unrepresentative and imperfect Mechanical Turk survey in which he asked people the following:
At the end of your life, you’ll get an additional X years of happy, youthful, and interesting life if you first agree to be covered in gasoline and burned in flames for one minute. How big would X have to be before you’d accept the deal?
More than 40 percent said that they would not accept it “regardless of how many extra years of life” they would get (see the link for some discussion of possible problems with the survey).
II.
The Future of Life Institute did a Superintelligence survey in which they asked, “What should a future civilization strive for?” A clear plurality (roughly a third) answered “minimize suffering” — a rather different question, to be sure, but it does suggest that a strong emphasis on reducing suffering is very common.
1. Do you know of any good articles etc. that make the case for such views?
I’ve tried to defend such views in chapters 4 and 5 here (with replies to some objections in chapter 8). Brian Tomasik has outlined such a view here and here.
But many authors have in fact defended such views about extreme suffering. Among them are Ingemar Hedenius (see Knutsson, 2019); Ohlsson, 1979 (review); Mendola, 1990; 2006; Mayerfeld, 1999, pp. 148, 178; Ryder, 2001; Leighton, 2011, ch. 9; and Gloor, 2016, II.
And many more have defended views according to which happiness and suffering are, as it were, morally orthogonal.
2. Do you think such or similar views are necessary to prioritize s-risks?
As Tobias said: no. Many other views can support such a priority. Some of them are reviewed in chapters 1, 6, and 14 here.
3. Do you think most people would/should vote in such a way if they had enough time to consider the arguments?
I say a bit about this in footnote 23 in chapter 1 and in section 4.5 here.
4. It seems to me that people constantly trade happiness for suffering … This makes me believe that most people … are also far from demanding a 1:10^17 return, let alone holding that no amount of happiness could compensate for any kind of suffering.
There are many things to say about this. First, as Tobias hinted, acceptable intrapersonal tradeoffs cannot necessarily be generalized to moral interpersonal ones (cf. sections 3.2 and 6.4 here). Second, there is the point Jonas made, which is discussed a bit in section 2.4 of the same book. Third, tradeoffs concerning mild forms of suffering that a person agrees to undergo do not necessarily say much about tradeoffs concerning states of extreme suffering that the sufferer finds unbearable and cannot consent to (e.g. one may endorse lexicality between very mild and very intense suffering, cf. Klocksiem, 2016, or hold that voluntarily endured suffering occupies a different moral dimension than suffering that is unbearable and cannot be voluntarily endured). More considerations of this sort are reviewed in section 14.3, “The Astronomical Atrocity Problem”, here.
Thanks a lot for the reply and all the links.
4. It seems to me that people constantly trade happiness for suffering (taking drugs expecting a hangover, eating unhealthy food expecting health problems or just feeling too full, finishing that show on Netflix instead of going to sleep…). …
One counterargument that has been raised against this is that people just accept suffering in order to avoid other forms of suffering. E.g., you might feel bored if you don’t take drugs, might have uncomfortable cravings for unhealthy food if you don’t eat it, etc.
I do think this point could be part of an interesting argument, but as it stands, it merely offers an alternative explanation without analyzing carefully which of the two explanations is correct. So on its own, this doesn’t seem to be a strong counterargument yet.
Thanks for the reply. With regard to drugs, I think it depends on the situation. Many people drink alcohol even when they are already in a good mood, in order to get even more excited (while being fully aware that they might experience at least some kind of suffering the next day, and possibly long-term). In such cases I don’t think one could say they do it to avoid suffering (unless you declare everything below the best possible experience suffering). There are obviously other cases where people just want to stop thinking about their problems, stop feeling physical pain, etc.
I don’t think that if someone rejects the rationality of trading off neutrality for a combination of happiness and suffering, they need to explain every case of this. (Analogously, the fact that people often do things for reasons other than maximizing pleasure and minimizing pain isn’t an argument against ethical hedonism, just psychological hedonism.) Some trades might just be frankly irrational or mistaken, and one can point to biases that lead to such behavior.
I don’t think this view is necessary to prioritise s-risk. A finite but relatively high “trade ratio” between happiness and suffering can be enough to focus on s-risks. In addition, I think it’s more complicated than putting some numbers on happiness vs. suffering. (See here for more details.) For instance, one should distinguish between the intrapersonal and the interpersonal setting—a common intuition is that one man’s pain can’t be outweighed by another’s pleasure.
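To make the “trade ratio” idea concrete, one rough formalization (a sketch for illustration only, not anything stated in the poll or the linked post) is this: if an outcome contains total happiness H and total suffering S, measured in comparable units, then a trade ratio of r counts the outcome as net positive iff

    H > r · S.

A fully symmetric view sets r = 1; the poll answers discussed above correspond to a very large r (e.g. 10^17) or to the limit r → ∞, in which no finite amount of H outweighs any S. Even a large but finite r makes reducing S the dominant consideration whenever S could be astronomically large, which is why it can suffice for prioritizing s-risks.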
Another possibility is lexicality: one may contend that only certain particularly bad forms of suffering can’t be outweighed. You may find such views counterintuitive, but it is worth noting that lexicality can be multi-dimensional and need not involve abrupt breaks. It is, for instance, quite possible to hold the view that 1 minute of lava is ‘outweighable’ but 1 day is not. (I think I would not have answered “no amount can compensate” if it had been about 1 minute.)
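One way to spell out such a lexical view (again only an illustrative sketch, not the commenters’ own formulation): split suffering into S_ext, the suffering above some extremity threshold, and S_mild, the rest, and rank outcomes lexicographically:

    A is better than B iff S_ext(A) < S_ext(B),
    or S_ext(A) = S_ext(B) and H(A) − r · S_mild(A) > H(B) − r · S_mild(B).

On this toy version, no amount of H compensates for additional S_ext, while ordinary tradeoffs still apply below the threshold. Multi-dimensional variants replace the single sharp threshold with several graded criteria (e.g. duration as well as intensity), which is how a view can treat 1 minute of lava as outweighable while denying this for a full day.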
I also sympathise with the view Jonas mentioned: that happiness matters mostly insofar as an existing being has a craving or desire to experience it. The question, then, is just how strong the desire to experience a certain timespan of bliss is. The poll was just about how I would make this tradeoff for myself, and it just so happens that abstract prospects of bliss do not evoke a very strong desire in me. It’s certainly not enough to accept a day of drowning in lava, and that is true regardless of how long the bliss lasts. Your psychology may be different, but I don’t think there’s anything inconsistent or illogical about my preferences.
Thanks a lot for the reply and the links.