I’m not sure that adding impaired/unproductive people would counterfactually reduce others—if a person with a disability refrains from having a child, that doesn’t mean that some healthy person elsewhere has an extra child.
Re being happy to be alive, I kind of want to distinguish ‘being unhappy with one’s life’ and ‘being happy to be alive’. I think you can have net-negative wellbeing and broadly think your life sucks, but still not sincerely want to die, or wish you’d never been born. This hunch is mainly based on my own experience: I’ve had times in my life where I think my wellbeing was net-negative, but I still didn’t wish I hadn’t been born. Basically I have a sense that there’s a value to my life that’s not straightforwardly related to my wellbeing.
if a person with a disability refrains from having a child, that doesn’t mean that some healthy person elsewhere has an extra child.
It means there are fewer resources to go around, which fractionally disincentivises ~8 billion people from the expensive act of reproduction.
I think you can have net-negative wellbeing and broadly think your life sucks, but still not sincerely want to die, or wish you’d never been born.
This claim makes strong philosophical assumptions. One could question what it even means to either ‘not wish you’d never been born’ or to ‘not want to die’ when your wellbeing is negative.
One could also claim on a hedonic view that, whatever it means to want not to die, having net-negative wellbeing is the salient point and in an ideal world you would painlessly stop existing. This sounds controversial for humans, but we do it all the time with our pets: throughout their lives they will fight for survival if put in a threatening situation, but if we think they’re suffering too much we will override their desires and take them for one last visit to the vet.
One could question what it even means to either ‘not wish you’d never been born’ or to ‘not want to die’ when your wellbeing is negative.
One could also claim on a hedonic view that, whatever it means to want not to die, having net-negative wellbeing is the salient point and in an ideal world you would painlessly stop existing.
Given that the lived experience of some (most?) of the people who live lives full of suffering is different from what that model predicts, this suggests that the model is just wrong.
The idea of modeling people as having a single utility that can be negative and thus make their lives “not worth living” is way too simplistic.
I don’t want to give too much detail on a public forum, but I myself am also an example of how this model fails miserably.
What do you mean ‘the model is wrong’? You seem to be confusing functions (morality) with parameters (epistemics).
The idea of modeling people as having a single utility that can be negative and thus make their lives “not worth living” is way too simplistic.
It’s also necessary if you want your functions to be quantitative. Maybe you don’t, but then the whole edifice of EA becomes extremely hard to justify.
If the phrase “Most people have net-positive utility” is rephrased as “most people don’t actively want to not exist” it sounds totally unsurprising, and not nearly as positive as the original sentence. Moreover, it doesn’t seem to be the definition most utilitarians use: For example, “It’s okay to create people as long as they will have net positive utility” would lose all intuitive support if transformed into “It’s okay to create people as long as they won’t actively want to not exist, even if their lives are filled with suffering”.
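To make the distinction concrete, here is a minimal toy sketch (my own illustration, with made-up numbers, not anyone’s actual model) of how a purely hedonic ‘net utility > 0’ criterion and a ‘does this person want to exist?’ criterion can give different answers for the same life:

```python
# Toy sketch: a purely hedonic score sums momentary affect, while a
# preference-style criterion just asks whether the person affirms existing.
# The two can come apart, which is the point being argued above.

def hedonic_net_utility(affect_moments):
    """Sum of momentary positive/negative affect over a life (or period)."""
    return sum(affect_moments)

def worth_living_hedonic(affect_moments):
    """'Worth living' on the simple hedonic reading: net utility > 0."""
    return hedonic_net_utility(affect_moments) > 0

def worth_living_preference(wants_to_exist):
    """'Worth living' on the preference reading: the person wants to exist."""
    return wants_to_exist

# A life with slightly more negative than positive affect, whose subject
# nevertheless wants to go on living (the case described in the thread):
affect = [+1.0, -1.5, +2.0, -2.0]        # made-up numbers
print(hedonic_net_utility(affect))        # -0.5  (net negative)
print(worth_living_hedonic(affect))       # False
print(worth_living_preference(True))      # True  -- the two criteria diverge
```

The point is just that the two readings of ‘a life worth living’ are distinct predicates, so an argument that sounds intuitive under one can lose its support under the other.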
I’m inclined to consider it far more counterintuitive to think ‘if this person experiences overall slightly more negative than positive affect, but very much wants to live, and finds their life meaningful, and I painlessly murder them in their sleep, then I have done them a favor’, which is what a purely hedonist account of individual well-being implies. (Note that this is about what is good for them, not what you morally ought to do, so standard utilitarian points about why actually murdering people will nearly always decrease overall utility across all people are true but irrelevant.)