I am very reluctant to draw inferences on preferences from instinctive behaviour
Fair! I would say instinctive behaviour could provide a prior for what beings want, but that we should remain open to going against it given enough evidence. I have complained about Our World in Data's implicitly assuming that nature conservation is good.
If we are in favour of euthanasia and assisted suicide though, that suggests there is an acceptance that death is preferable to at least some types/levels of suffering.
Agreed. For what it is worth, I estimated 6.37% of people have negative lives. This is one reason I prefer using WELLBYs instead of DALYs/QALYs, which assume lives are always positive.
Is it really so black and white that we definitely shouldn't press that hypothetical button? And if it is, what are the implications of that?
It is quite clear to me I should not painlessly eliminate all sentient beings forever. Even though I have no idea whether the current total welfare is positive or negative, I am more confident that future total welfare is positive. I expect intelligent beings to control an ever increasing fraction of the resources in the universe. I estimated the scale of wild animal welfare is 50.8 M times that of human welfare, but this ratio used to be orders of magnitude larger when there were only a few humans. Extrapolating how this ratio has evolved across time into the future suggests the welfare of the beings in control of the future (humans now, presumably digital beings in the future) will dominate. In addition, I expect intelligent beings like humans to have positive lives for the most part, so I am guessing the expected value of the future is positive.
Even if I thought the expected value of the future was negative, I would not want to press the button. In that case, pressing the button would be good, as it would increase the value of the future from negative to neutral. However, I guess there would be actions available to me which could make the future positive, thus being better than just pressing the button. For example, conditional on me having the chance to press such a button, I would likely have a super important position in the world government, so I could direct lots of resources towards investigating which beings are having positive and negative lives, and then painlessly eliminate or improve the negative ones to maximise total welfare.
We value positive experiences more than we disvalue suffering?
As long as positive and negative experiences are being measured in the same unit, 1 unit of welfare plus 1 unit of suffering cancel out.
We think some level of happiness can justify or balance out extreme suffering?
I think so, as I strongly endorse the total view. Yet, there are physical limits. If the amount of suffering is sufficiently large, there may not be enough energy in the universe to produce enough happiness to outweigh it.
What's the tipping point? If every being on Earth was being endlessly tortured, should we push the button?
If there was no realistic way of stopping the widespread torture apart from killing everyone involved, I would be happy with killing all humans. However, I do not think it would be good to kill all beings, as I think wild animals have good lives, although I am quite uncertain.
I do think there's an asymmetry between suffering and happiness/positive wellbeing
In which sense do you think there is an asymmetry? As I said above, I think 1 unit of welfare plus 1 unit of suffering cancel out. However, I think it is quite possible that the maximum amount of suffering Smax which can be produced with a certain amount of energy exceeds the maximum amount of happiness Hmax which can be produced with the same energy. On the other hand, I think the opposite is also possible, so I am guessing Smax = Hmax (relatedly), although the total view does not require this.
With that in mind, I really don't think there is any level of human satisfaction for which I would be comfortable saying "this volume of human joy/positive wellbeing is worth/justifies the continuation of one human being subject to extreme torture". If that's the case, can I really say it's the wrong thing to do to press the hypothetical painless-end-for-all button in a world where we know there are beings experiencing extreme suffering?
In the 1st sentence above, I think you are saying that "arbitrarily large amount of happiness"*"value of happiness" <= "some amount of extreme suffering"*"disvalue of extreme suffering", i.e. "value of happiness" <= "some amount of extreme suffering"*"disvalue of extreme suffering"/"arbitrarily large amount of happiness". The right-hand side goes to 0 as "arbitrarily large amount of happiness" goes to infinity, so the inequality tends to "value of happiness" <= 0; by definition, "value of happiness" >= 0 (otherwise it would not be happiness, but suffering). So I believe your 1st sentence implies "value of happiness" = 0. In other words, I would say you are valuing happiness the same as non-existence. In this case, having maximally happy beings would be as valuable as non-existence, so painlessly eliminating all beings forever by pressing the button would be optimal, in the sense that no action would produce more value.
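The squeeze above can be sketched numerically; the figure for the suffering term below is an arbitrary placeholder, not an estimate from this discussion:

```python
# If H * v <= S * d must hold for every H ("arbitrarily large amount of
# happiness"), then v <= S * d / H, and the bound on v shrinks as H grows.
S_times_d = 1000.0  # "suffering" times its disvalue (arbitrary placeholder)

for H in [1e3, 1e6, 1e9, 1e12]:
    bound = S_times_d / H  # upper bound on v, the value of 1 unit of happiness
    print(f"H = {H:.0e}: v <= {bound:.1e}")

# Combined with v >= 0 by definition, v is squeezed to exactly 0 in the limit.
```

The point of the loop is only that the bound can be made smaller than any positive number, which is what forces v = 0.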
Of course, I personally do not think it makes any sense to value happiness and non-existence the same. I assume most people would have the same view on reflection.
I think (probably about 90% sure rather than 100%) I agree that happiness is preferable to non-existence. However, I don't think there's an urgency/moral imperative to act to create happiness over neutral states in the same way that there is an urgency and moral imperative to reduce suffering. I.e. I think it's much more important to spend the world's resources reducing suffering (taking people from a position of suffering to a position of neutral needs met/not in suffering) than to spend resources on boosting people from a neutral needs-met state (which needn't be non-existence) to a heightened "happiness" state.

I hold both that the value difference between neutral and suffering is much larger than the value difference between neutral and happiness, AND that there is a moral imperative to reduce suffering where there isn't necessarily a moral imperative to increase happiness.
To give an example, if presented with the option to either give someone a paracetamol for a mild headache or to give someone a bit of cake that they would enjoy (but do not need; they are not in famine/hunger), I would always choose the painkiller. And, perhaps I'm wrong, but I think this would be quite a common preference in the general population. I think most people, on a case-by-case basis, would make statements indicating they do believe we should prioritise suffering. Yet, when we talk in aggregate, suffering-prioritisation seems to be less prevalent. It reminds me of some of the examples in the Frames and Reality chapter of Thinking, Fast and Slow about how people will respond to essentially the same scenario differently depending on its framing.
With apologies for getting a bit dark: (with the possible exclusion of sociopaths etc.) I think people in general would agree they would refuse an ice cream or the joy of being on a rollercoaster if the cost of it was that someone would be tortured or raped. My point is that I can't think of any amount of positive experience/happiness for which I would be willing to say "yes, this extra happiness for me balances out someone else being raped". So there are at least some examples of suffering that I just don't think can be offset by any amount of happiness, and therefore my viewpoint definitely includes an asymmetry between happiness and suffering. Morally, I just don't think I can accept a view that says some amount of happiness can offset someone else's rape or torture.
And I am concerned that the views of people who have experienced significant suffering are very under-represented, and we don't think about their viewpoints because it's easier not to and they often don't have a platform. What proportion of people working in population ethics have experienced destitution, or been a severe burns victim? What proportion have spoken to and listened to people who have experienced extreme suffering, in order to mitigate their own experiential gap? How does this impact their conclusions?
Hi, sorry if I'm a bit late here, and I don't want to repeat myself too much, but since I feel it was not properly understood, one of the main points I originally made in this thread, and want to really hit home, is that happiness as measured while in a state of happiness cannot be compared in any way to non-existence as "measured" in a state of non-existence, since we obviously cannot perceive sensations (or literally anything) when dead/not in existence. So the common intuition that happiness is preferable to non-existence is based on our shallow understanding of what it is to "be" dead/non-existent, but from a rational point of view this idea simply does not hold. If I was being tortured with no way out, I would certainly want to die as quickly as I could; however, when I imagine death in that moment, I am imagining (while in the state of suffering, and not in the "state" of death) a cessation of that suffering. But to experience such a cessation, I must be able to experience something which I can compare against said experience of suffering. So, technically speaking, it does not make sense to say that happiness/suffering is better than non-existence as measured in the respective states of happiness/suffering and death/non-existence.
And it's not like death/non-existence is neutral in this case. If you picture a scale, with positive experiences (e.g. happiness/satisfaction) in the positive direction and negative experiences (e.g. pain/suffering) in the negative direction, death does NOT appear at 0, since what we are measuring is the perceived value of the experiences. Put another way, in terms of utility functions: if someone's utility function is at some value and then they die, rather than immediately going to zero, their utility function immediately ceases to exist, as a utility function must belong to someone.
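One way to make the "ceases to exist" point concrete is to distinguish "no value" from "value zero" in code; here `None` stands in for non-existence, and all names and numbers are illustrative only:

```python
from statistics import mean

# Two living beings with utilities 4.0 and -1.0, and one dead/non-existent
# being, represented as None (no utility at all) rather than 0.0.
population = [4.0, -1.0, None]

# Average wellbeing over beings that exist: the dead contribute no term.
existing = [u for u in population if u is not None]
print(mean(existing))  # 1.5

# Treating death as "utility = 0" instead changes the answer.
print(mean(0.0 if u is None else u for u in population))  # 1.0
```

The two averages differ, which is the cash value of saying death sits nowhere on the scale rather than at 0.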
Also, this idea of mine is somewhat new to me (a few months old, maybe), so I haven't thought through many implications and edge cases too thoroughly (yet). But this idea, however difficult for me to wrestle with, is something I find myself simply unable to reason my way out of.
Thanks for clarifying too! Strongly upvoted.
On asymmetry (and indeed most of the points I'm trying to make) Magnus Vinding gives better explanations than I could. On asymmetry specifically, I'd recommend: https://centerforreducingsuffering.org/research/suffering-and-happiness-morally-symmetric-or-orthogonal/
and on whether positive goods can outweigh suffering: https://centerforreducingsuffering.org/research/on-purported-positive-goods-outweighing-suffering/
To get a better understanding of these points, I highly recommend his book "Suffering-Focused Ethics": it is the most compelling thing I've read on these topics.