Thank you for clarifying, Vasco—and for the welcome. I think it’s important to distinguish between active, reasoned preferences and instinctive responses. There are lots of things that humans and other animals do instinctively that they might also choose not to do if given an informed choice. A trivial example—I scratch bug bites instinctively, including sometimes in my sleep, even though my preference is not to scratch them. There are many other examples in the world, from criminals who instinctively look directly at CCTV cameras when a sound draws their attention, to turtles that head towards man-made lights instead of the ocean—and I’m sure there are better examples than these ones I’m thinking of off the top of my head. But in short, I am very reluctant to draw inferences on preferences from instinctive behaviour. I don’t think the two are always linked. I’m also not sure—if we could theoretically communicate such a question to them—what proportion of non-human animals are capable of the level of thinking required to consider whether they would want to continue living if given the option.
I agree with you that it is unclear whether the total sum of experiences on Earth is positive or negative; but I also don’t necessarily believe that there is an equivalence, or that positive experiences can be netted off against negative experiences, so I’m not convinced that considering all beings’ experiences as a ‘total’ is the moral thing to do. If we do try to total them all together to get some kind of net positive or negative, how do we balance them out—how much happiness would someone’s torture be worth, or be netted off against, in this scenario? It feels very dangerous to me to try to infer some sort of equivalency. I personally feel that only the individuals affected by the suffering can say under what circumstances they feel the suffering is worth it—particularly as different people can respond to and interpret the same stimuli differently.

Like you, I am certainly not inclined to start killing people off against their will (and ‘against their will’ is a qualifier which adds completely different dimensions to the scenario; killing individuals is also extremely different to a hypothetical button painlessly ending all life—if you end all life, there is no one to mourn or to be upset or to feel pain or indeed injustice about individuals no longer being alive, which obviously isn’t the case if you are talking about solitary deaths). If we are in favour of euthanasia and assisted suicide though, that suggests there is an acceptance that death is preferable to at least some types of/levels of suffering. To go back to the original post, what I was defending is the need for more active discussion of the implications of accepting that concept. I do fear that because many humans find it uncomfortable to talk about death, and because we may personally prefer to be alive, it can be uncomfortable to think about and acknowledge the volume of suffering that exists.
It’s a reasonably frequent lament in the EA world that not enough people care about the suffering of non-human animals, and there is criticism of people who are viewed as effectively ignoring the plight of animals in the food industry because they’d rather not know or think about it. I worry, though, that many in EA do the same thing with this kind of question. I think we too easily write off the hypothetical ‘kill all painlessly’ button, because there’s an instinctive desire to live and those of us who are happy living would rather not think about how many beings might prefer nothingness to living if given a choice. I’m not saying I definitely would push such a button, but I am saying that I think a lot of people who say they definitely wouldn’t are answering instinctively rather than because they’ve given adequate consideration to the scenario.

Is it really so black and white as we definitely shouldn’t press that hypothetical button—and if it is, what are the implications of that? We value positive experiences more than we disvalue suffering? We think some level of happiness can justify or balance out extreme suffering? What’s the tipping point—if every being on Earth was being endlessly tortured, should we push the button? What about every being on Earth bar one? What if it’s 50/50?

I will readily admit I do not have a philosophy PhD, I have lots of further reading to do in this space, and I am not ready to say definitively what my view is on the hypothetical button one way or the other. But I do personally view death or non-existence as a neutral state, I do view suffering as a negative to be avoided and I do think there’s an asymmetry between suffering and happiness/positive wellbeing. With that in mind I really don’t think that there is any level of human satisfaction that I would be comfortable saying ‘this volume of human joy/positive wellbeing is worth/justifies the continuation of one human being subject to extreme torture’.
If that’s the case, can I really say it’s the wrong thing to do to press the hypothetical painless end for all button in a world where we know there are beings experiencing extreme suffering?
I am very reluctant to draw inferences on preferences from instinctive behaviour
Fair! I would say instinctive behaviour could provide a prior for what beings want, but that we should remain open to going against them given enough evidence. I have complained about Our World in Data’s implicitly assuming that nature conservation is good.
If we are in favour of euthanasia and assisted suicide though, that suggests there is an acceptance that death is preferable to at least some types of/levels of suffering.
Agreed. For what it is worth, I estimated 6.37 % of people have negative lives. This is one reason I prefer using WELLBYs instead of DALYs/QALYs, which assume lives are always positive.
Is it really so black and white as we definitely shouldn’t press that hypothetical button—and if it is, what are the implications of that?
It is quite clear to me I should not painlessly eliminate all sentient beings forever. Even though I have no idea about whether the current total welfare is positive/negative, I am more confident that future total welfare is positive. I expect intelligent beings to control an ever increasing fraction of the resources in the universe. I estimated the scale of wild animal welfare is 50.8 M times that of human welfare, but this ratio used to be orders of magnitude larger when there were only a few humans. Extrapolating how this ratio has evolved across time into the future suggests the welfare of the beings in control of the future (humans now, presumably digital beings in the future) will dominate. In addition, I expect intelligent beings like humans to have positive lives for the most part, so I am guessing the expected value of the future is positive.
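The extrapolation step above can be made concrete with a toy model. The 50.8 M starting ratio is the estimate quoted in the text; the geometric shrinkage rate per period is a purely hypothetical assumption of mine, chosen only to illustrate why a shrinking ratio implies the controllers' welfare eventually dominates:

```python
# Toy sketch: if the ratio of wild-animal welfare scale to "controller"
# welfare scale (humans now, perhaps digital beings later) keeps shrinking
# geometrically, the controllers' share of total welfare scale tends to 1.
def controller_share(initial_ratio: float, shrink_factor: float, periods: int) -> float:
    """Fraction of total welfare scale accounted for by the beings in
    control, assuming the wild-to-controller ratio shrinks geometrically."""
    ratio = initial_ratio * shrink_factor ** periods
    return 1 / (1 + ratio)

# Today: starting from the quoted 50.8 M ratio, controllers account for a
# tiny fraction of the total welfare scale.
today = controller_share(50.8e6, 0.1, 0)
# After many periods of (hypothetical) 10x shrinkage, their welfare dominates.
later = controller_share(50.8e6, 0.1, 12)
```

The conclusion does not depend on the particular shrink factor, only on the ratio continuing to fall.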
Even if I thought the expected value of the future was negative, I would not want to press the button. In that case, pressing the button would be good in itself, as it would increase the value of the future from negative to neutral. However, I guess there would be actions available to me which could make the future positive, thus being better than just pressing the button. For example, conditional on me having the chance to press such a button, I would likely have a super important position in the world government, so I could direct lots of resources towards investigating which beings are having positive and negative lives, and then painlessly eliminate or improve the negative ones to maximise total welfare.
We value positive experiences more than we disvalue suffering?
As long as positive and negative experiences are being measured in the same unit, 1 unit of welfare plus 1 unit of suffering cancel out.
We think some level of happiness can justify or balance out extreme suffering?
I think so, as I strongly endorse the total view. Yet, there are physical limits. If the amount of suffering is sufficiently large, there may not be enough energy in the universe to produce enough happiness to outweigh it.
What’s the tipping point—if every being on Earth was being endlessly tortured, should we push the button?
If there was no realistic way of stopping the widespread torture apart from killing everyone involved, I would be happy with killing all humans. However, I do not think it would be good to kill all beings, as I think wild animals have good lives, although I am quite uncertain.
I do think there’s an asymmetry between suffering and happiness/positive wellbeing
In which sense do you think there is an asymmetry? As I said above, I think 1 unit of welfare plus 1 unit of suffering cancel out. However, I think it is quite possible that the maximum amount of suffering Smax which can be produced with a certain amount of energy exceeds the maximum amount of happiness Hmax which can be produced with the same energy. On the other hand, I think the opposite is also possible, so I am guessing Smax = Hmax (relatedly), although the total view does not require this.
With that in mind I really don’t think that there is any level of human satisfaction that I would be comfortable saying ‘this volume of human joy/positive wellbeing is worth/justifies the continuation of one human being subject to extreme torture’. If that’s the case, can I really say it’s the wrong thing to do to press the hypothetical painless end for all button in a world where we know there are beings experiencing extreme suffering?
In the 1st sentence above, I think you are saying that (arbitrarily large amount of happiness) × (value of happiness) ≤ (some amount of extreme suffering) × (disvalue of extreme suffering), i.e. (value of happiness) ≤ (some amount of extreme suffering) × (disvalue of extreme suffering)/(arbitrarily large amount of happiness). The right-hand side tends to 0 as the arbitrarily large amount of happiness goes to infinity, so the inequality forces (value of happiness) ≤ 0, and by definition (value of happiness) ≥ 0 (otherwise it would not be happiness, but suffering). So I believe your 1st sentence implies (value of happiness) = 0. In other words, I would say you are valuing happiness the same as non-existence. In this case, having maximally happy beings would be as valuable as non-existence. So painlessly eliminating all beings forever by pressing the button would be optimal, in the sense that there is no action which would produce more value.
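The limit argument above can be written compactly. Writing \(H\) for the arbitrarily large amount of happiness, \(v \ge 0\) for the value of one unit of happiness, and \(D\) for the fixed total disvalue of the extreme suffering (the symbols are mine, introduced just to restate the sentence above):

```latex
H v \le D \quad \text{for all } H > 0
\;\Longrightarrow\;
v \le \frac{D}{H} \xrightarrow[H \to \infty]{} 0 ,
```

so \(v \le 0\); combined with \(v \ge 0\) by definition, this forces \(v = 0\).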
Of course, I personally do not think it makes any sense to value happiness and non-existence the same. I assume most people would have the same view on reflection.
I think—probably about 90% sure rather than 100% - I agree that happiness is preferable to non-existence. However, I don’t think there’s an urgency/moral imperative to act to create happiness over neutral states in the same way that there is an urgency and moral imperative to reduce suffering. I.e. I think it’s much more important to spend the world’s resources reducing suffering (taking people from a position of suffering to a position of neutral needs met/not in suffering) than to spend resources on boosting people from a neutral needs met state (which needn’t be non-existence) to a heightened ‘happiness’ state. I view that both: the value difference between neutral and suffering is much larger than the value difference between neutral and happiness AND that there is a moral imperative to reduce suffering where there isn’t necessarily a moral imperative to increase happiness.
To give an example, if presented with the option either to give someone a paracetamol for a mild headache or to give someone a bit of cake that they would enjoy (but do not need—they are not in famine/hunger), I would always choose the painkiller. And—perhaps I’m wrong—I think this would be quite a common preference in the general population. I think most people, on a case-by-case basis, would make statements that indicate they do believe we should prioritise suffering. Yet, when we talk in aggregate, suffering-prioritisation seems to be less prevalent. It reminds me of some of the examples in the Frames and Reality chapter of Thinking, Fast and Slow about how people will respond to essentially the same scenario differently depending on its framing.
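One minimal way to formalise this kind of suffering-priority is a value function with a steeper slope below the neutral point than above it. A sketch, where the weight of 10 and the wellbeing magnitudes are arbitrary choices of mine, purely for illustration:

```python
def value_of_level(wellbeing: float, suffering_weight: float = 10.0) -> float:
    """Moral value of a wellbeing level, counting states below neutral (0)
    more heavily than equally sized states above it."""
    if wellbeing >= 0:
        return wellbeing
    return suffering_weight * wellbeing

# Painkiller: moves someone from a mild headache (-1) back to neutral (0).
painkiller_gain = value_of_level(0.0) - value_of_level(-1.0)
# Cake: moves someone from neutral (0) to mild enjoyment (+1).
cake_gain = value_of_level(1.0) - value_of_level(0.0)
# Under this weighting, relieving the headache is worth 10x the cake,
# even though the two changes are the same size in wellbeing terms.
```

This captures the first half of the asymmetry claim (the value differences differ); the second half, that only suffering carries a moral imperative, is a further claim the weighting alone does not express.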
With apologies for getting a bit dark—(with the possible exclusion of sociopaths etc.) I think people in general would agree they would refuse an ice cream, or the joy of being on a rollercoaster, if the cost of it was that someone would be tortured or raped. My point is that I can’t think of any amount of positive/happiness for which I would be willing to say yes, this extra happiness for me balances out someone else being raped. So there are at least some examples of suffering that I just don’t think can be offset by any amount of happiness, and therefore my viewpoint definitely includes asymmetry between happiness and suffering. Morally, I just don’t think I can accept a view that says some amount of happiness can offset someone else’s rape or torture.
And I am concerned that the views of people who have experienced significant suffering are very under-represented, and that we don’t think about their viewpoints because it’s easier not to and they often don’t have a platform. What proportion of people working in population ethics have experienced destitution or been a severe burns victim? What proportion have spoken to and listened to people who have experienced extreme suffering, in order to try to mitigate their own experiential gap? How does this affect their conclusions?
Hi, sorry if I’m a bit late here, and I don’t want to repeat myself too much, but since I feel it was not properly understood, one of the main points I originally made in this thread, and which I want to really hit home, is that happiness as measured while in a state of happiness cannot be compared in any way to non-existence as “measured” in a state of non-existence, since we obviously cannot perceive sensations (or literally anything) when dead/not in existence. So the common intuition that happiness is preferable to non-existence is based upon our shallow understanding of what it is to “be” dead/non-existent, but from a rational point of view this idea simply does not hold. If I were being tortured with no way out, I would certainly want to die as quickly as I could; however, when I imagine death in that moment, I am imagining (while in the state of suffering, and not in the “state” of death) a cessation of that suffering. But to experience such a cessation, I must be able to experience something which I can compare against said experience of suffering. So technically speaking, it doesn’t make any sense to say that happiness/suffering is better than non-existence as measured in the respective states of happiness/suffering and death/non-existence.
And it’s not like death/non-existence is neutral in this case. If you picture a scale, with positive experiences (e.g. happiness/satisfaction) in the positive direction and negative experiences (e.g. pain/suffering) in the negative direction, death does NOT appear at 0 since what we are measuring is the perceived value of the experiences. Put another way in terms of utility functions, if someone has a utility function at some value, and then they die, rather than immediately going to zero, their utility function immediately ceases to exist, as a utility function must belong to someone.
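The distinction drawn above, between a utility that drops to zero and a utility that ceases to exist, can be made concrete by representing non-existence as the absence of a value rather than the value 0. A toy sketch (the representation is my own, just to illustrate the commenter's point):

```python
from typing import Optional

# None models non-existence: there is no subject, hence no utility value.
# This is different from a living subject at neutral utility 0.0.
Utility = Optional[float]

def mean_utility(population: list[Utility]) -> float:
    """Average utility over the beings that exist; the non-existent
    contribute no data point, rather than contributing a zero."""
    existing = [u for u in population if u is not None]
    return sum(existing) / len(existing)

before = mean_utility([5.0, 0.0])   # second being alive at neutral
after = mean_utility([5.0, None])   # second being no longer exists
# Treating death as utility 0 would have left the average unchanged at 2.5;
# treating it as absence changes the average, because the function that
# belonged to that being has ceased to exist.
```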
Also, this idea of mine is somewhat new to me (a few months old, maybe), so I haven’t thought through many of its implications and edge cases too thoroughly (yet). But this idea, however difficult for me to wrestle with, is something I find myself simply unable to reason my way out of.
On asymmetry—and indeed most of the points I’m trying to make—Magnus Vinding gives better explanations than I could. On asymmetry specifically I’d recommend: https://centerforreducingsuffering.org/research/suffering-and-happiness-morally-symmetric-or-orthogonal/
and on whether positive can outweigh suffering: https://centerforreducingsuffering.org/research/on-purported-positive-goods-outweighing-suffering/
To get a better understanding of these points, I highly recommend his book ‘Suffering-Focused Ethics’—it is the most compelling thing I’ve read on these topics.