[i have the value that] almost all lives, even highly unpleasant ones, are worth living, and that I tend to weigh moments of happiness much more than equivalent moments of suffering, as this avoids what I see as philosophically problematic implications such as suicide for chronically depressed people, or nuking the rainforest as a net positive intervention.
do you mean that you chose this position because it avoids those conclusions? if so:
then the process you used was to select some (of many possible) moral axioms which lead to the conclusion you like.
i don’t think that would mean the axiom is your true value.
but if choosing axioms, you could instead just follow the conclusions you like, using an axiom such as “my morality is just complex [because it’s godshatter]”.
separately, the axiom you chose introduced a new ‘problematic’ conclusion: that someone in a mechanized torture chamber, who will be there for two more years (during which their emotional state will mostly only change between depression and physical-harm-induced agony—maybe there will also be occasional happiness, like if another animal tries to comfort them), and then die without experiencing anything else—should be kept alive (or be created) in that situation instead of ceasing to exist (or not being created), when these are the only choices.
that’s definitely something the universe allows one to prefer, as all moral preferences are. i’m just pointing it out because i think maybe it will feel immoral to you too, and you said you chose axioms to avoid problematic or immoral-feeling things.
in case it doesn’t feel wrong/‘philosophically problematic’ now, would it have felt that way before you started using this axiom, and so before your moral intuitions crystallized around it?
almost all lives, even highly unpleasant ones, are worth living
as i am a moral anti-realist, i cannot argue against a statement of what one values. but on priors about humans, i am not sure if you would actually want the world to be arranged in a way which follows this value, if you fully understood what it entails. have you spent time imagining, or experiencing, what it is like to live a life of extreme suffering? what it is like for it to be so bad that you desperately prefer nonexistence to it?
now, such lives could still be considered ‘worth it’ overall if they eventually get better or otherwise are considered meaningful somehow. but a life of just or almost just that? are you sure about that? and does this imply you would prefer to create a billion people whose lives last forever and almost only consist of depression/physical agony, if the only alternative was for them not to exist and no one happier to exist in their place—and if it does imply that, are you sure about that also? (maybe these fall under what ‘almost all’ doesn’t include for you, but then you’d also consider the lives of animals in mechanized torture facilities negatively worth living.)
(sometimes when humans ask, “are you sure about endorsing that”, there’s a subtext of social pressure, or of more subtly invoking someone’s social conformity bias so they will conform ~on their own. i do not mean it that way, i really mean it only as prompting you to consider.)
As someone who has experienced severe depression and suicidal ideation, I do have at least some understanding of what it entails. It’s my own experience that biases me in the way I described. Admittedly, my life has gotten better since then, so it’s not the same thing as a life of just extreme suffering.
What do you think about people who do go through with suicide? These people clearly thought their suffering outweighed any happiness they experienced.
I feel for them. I understand they made a decision in terrible pain, and can sympathize. To me it’s a tragedy.
But on an intellectual level, I think they made a very unfortunate mistake, made in a reasonable ignorance of complex truths that most people can’t be expected to know. And I admit I’m not certain I’m right about this either.
It’s my own experience that biases me in the way I described.
can you explain how?
i believe extreme suffering had the opposite effect on me, making me become a suffering-focused altruist. i don’t actually understand how it could make someone ~not disvalue suffering. (related: ‘small and vulnerable’).
(i mean, i have guesses about how that could happen: like, maybe ~not disvaluing it was the only way to mentally cope with the vast scale of it. living in a world one believes to be evil is hard; easier to not believe it’s evil, somehow; have heard this is a reason many new animal-suffering-boycotters find it hard to continue having an animal-caring worldview.
or, maybe experiencing that level of suffering caused a buddhist-enlightenment-like thing where you realized suffering isn’t real, or something. though, happiness wouldn’t be real either in that case. i’m actually adjacent to this view, but it sure feels real for the animals, and i would still like to make the world be good for those who believe in it.)
from your other comment:
it still feels mysterious / that comment seems more like ‘what you prefer and uncertainty’ than ‘why / what caused you to have those preferences’
I guess, going through extensive suffering made me cherish the moments of relative happiness all the more, and my struggle to justify my continued existence led me to place value in existence itself, a kind of “life-affirming” view as a way to keep on going.
There were times during my suicidal ideation that I thought that the world might be better off without me, for instance that if I died, my organs could be transplanted and save more lives than I could save by living, that I was a burden and that the resources expended keeping me alive were better used on someone who actually wanted to live.
To counter these ideas, I developed a nexus of other ideas about the meaning of life being about more than just happiness or lack thereof, that truth was also intrinsically important, that existence itself had some apparent value over non-existence.
i see, thanks for explaining!
i’m modelling this as: basic drive to not die → selects values that are compatible with basic drive’s fulfillment.
i’ve been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.)
that I was a burden and that the resources expended keeping me alive were better used on someone who actually wanted to live
in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you’ll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
Sorry for the delayed response.
i’m modelling this as: basic drive to not die → selects values that are compatible with basic drive’s fulfillment.
i’ve been wondering if humans generally do something like this. (in particular to continue having values/cares after ontological crises like: losing belief in a god, or losing a close other who one was dedicated to protecting.)
This does seem like a good explanation of what happened. It does imply that I had motivated reasoning though, which probably casts some doubt on those values/beliefs being epistemically well grounded.
in case anyone has similar thoughts: to have the level of altruism to even consider the question is extremely rare. there are probably far better things you can do, than just dying and donating; like earning to give, or direct research, or maybe some third thing you’ll come up with. (most generally, the two traits i think are needed for research are intelligence and creativity. this is a creative, unintuitive moral question to ask. and my perception is that altruism and intelligence correlate, but i could be wrong about that, or biased from mostly seeing EAs.)
These words are very kind. Thank you.
I should also add that part of why I consider it important when the conclusions reached by a moral theory don’t align with my moral intuitions is that there are studies in psychology showing that, for complex problems, intuition outperforms logical reasoning at getting the correct answer, so ensuring that a theory’s results are intuitive is, in a sense, a check on its validity.
If that’s not satisfactory, I can also offer two first-principles-based variants of Utilitarianism and hedonism that draw conclusions more similar to mine, namely Positive Utilitarianism and Creativism. Admittedly, these are just some ideas I had one day, and not something anyone else has, to my knowledge, advocated, but I offer them because they suggest to me that in the space of possible moralities, not all of them are so suffering-focused.
I’m admittedly uncertain about how much to endorse such ideas, so I don’t try to spread them. Speaking of uncertainty, another possible justification for my position may well be uncertainty about the correct moral theory, and putting some credence on things like Deontology and Virtue Ethics: the former, in its Kantian form, tends to care primarily about humans capable of reason, and the latter contains the virtue of loyalty, which may imply a kind of speciesism in favour of humans first, or a hierarchy of moral circles.
There’s the concept of a moral parliament that’s been discussed before. To simplify the decision procedure, I’d consider applying the principle of maximum entropy, aka the principle of indifference, which places an equal, uniform weight on each moral theory. If we have three votes, one for Utilitarianism, one for Deontology, and one for Virtue Ethics, two out of the three (a majority) seem to advocate a degree of human-centrism.
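To make that tally concrete, here is a minimal sketch in Python of an equal-weight parliament. The verdicts I assign to each theory are illustrative assumptions for the sake of the example, not claims about what these theories actually say.

```python
from collections import defaultdict

# Hypothetical verdicts, for illustration only, on the question
# "how much should non-human animals count?"
votes = {
    "Utilitarianism": "count animals and humans equally",
    "Deontology (Kantian)": "prioritize beings capable of reason",
    "Virtue Ethics (loyalty)": "prioritize humans first",
}

# Principle of indifference: each theory gets the same weight.
weight = 1 / len(votes)

support = defaultdict(float)
for theory, verdict in votes.items():
    support[verdict] += weight

# Treat the latter two (assumed) verdicts as broadly human-centric.
human_centric = {
    "prioritize beings capable of reason",
    "prioritize humans first",
}
human_centric_share = sum(w for v, w in support.items() if v in human_centric)

print(f"human-centric share: {human_centric_share:.2f}")  # 0.67
print(f"equal-weight share:  {support['count animals and humans equally']:.2f}")  # 0.33
```

Under these assumed verdicts, the human-centric options jointly carry two thirds of the weight, matching the two-out-of-three tally above.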
I’ve also considered the thought experiment of whether I would be loyal to humanity, or betray humanity to a supposedly benevolent alien civilization. Even if I assume the aliens were perfect Utilitarians, I would be hesitant to side with them.
I don’t expect any of these things to change anyone else’s mind, but hopefully you can understand why I have my rather eccentric and unorthodox views.