One could question what it even means either to ‘not wish you’d never been born’ or to ‘not want to die’ when your wellbeing is negative.
One could also claim, on a hedonic view, that whatever it means to want not to die, net-negative wellbeing is the salient point, and in an ideal world you would painlessly stop existing.
Given that the lived experience of some (most?) of the people who live lives full of suffering differs from what that model predicts, this suggests the model is just wrong.
The idea of modeling people as having a single utility that can be negative and thus make their lives “not worth living” is way too simplistic.
I don’t want to give too much detail on a public forum, but I myself am also an example of how this model fails miserably.
What do you mean by ‘the model is wrong’? You seem to be confusing functions (morality) with parameters (epistemics).
The idea of modeling people as having a single utility that can be negative and thus make their lives “not worth living” is way too simplistic.
It’s also necessary if you want your functions to be quantitative. Maybe you don’t, but then the whole edifice of EA becomes extremely hard to justify.
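To pin down the model being disputed (my own sketch; the notation is mine, not from either comment): assign each person $i$ a single welfare level $u_i \in \mathbb{R}$ and aggregate as

$$W = f(u_1, \dots, u_n) = \sum_i u_i,$$

with a life counted as ‘not worth living’ exactly when $u_i < 0$. On the functions-vs-parameters point: the choice of $f$ (here, the sum) is the moral commitment, while the values $u_i$ are empirical estimates; evidence from lived experience can show the estimates $u_i$ are badly wrong without, by itself, telling against the choice of $f$.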