What do you mean, ‘the model is wrong’? You seem to be confusing functions (morality) with parameters (epistemics).
The idea of modeling people as having a single utility value that can go negative, and thus make their lives “not worth living”, is way too simplistic.
That simplification is also necessary if you want your moral functions to be quantitative. Maybe you don’t, but then the whole edifice of EA becomes extremely hard to justify.