Yeah, I’ve lately been considering just three options for moral weights: ‘humans only’, ‘including animals’, and ‘longtermist’, with the first two being implicitly neartermist.
It seems like we don’t need ‘longtermist with humans only’ and ‘longtermist including animals’ because if things go well the bulk of the beings that exist in the long run will be morally relevant (if they weren’t we would have replaced them with more morally relevant beings).
But even within ‘humans only’ (say, weighted by ‘probability of existing’ … or counting only those sure to exist), there are still difficult moral parameters, such as:
suffering vs happiness
death of a being with a strong identity vs suffering
death of babies vs children vs adults
(Similar questions ‘within animals’ too).
Agreed. I guess my intuition is that using WALYs for humans+animals (scaled for brain complexity), humans only, and longtermist beings will be a decent enough approximation for maybe 80% of EAs and over 90% of the general public. Not that it’s the ideal metric for these people, but good enough that they’d treat the results as pretty important if they knew the calculations were done well.
Do you mean all three separately (humans, animals, potential people) or trying to combine them in the same rating?
My impression was that separate ratings could work, but that if you combine them, one of the three will overwhelm the others.
If you do a linear weighting, this is expected. But one approach to worldview diversification is to normalize.
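Roughly, something like this toy sketch (all the intervention names and numbers are made-up placeholders, not real estimates): rescale each worldview so its best option scores 1, and only then compare across worldviews.

```python
# Illustrative sketch of worldview normalization (all numbers are placeholders).
# Raw cost-effectiveness estimates, in each worldview's own units per dollar,
# so they are not directly comparable across worldviews.
raw_cost_effectiveness = {
    "humans_only":       {"intervention_a": 0.02, "intervention_b": 0.005},
    "including_animals": {"intervention_c": 15.0, "intervention_d": 3.0},
    "longtermist":       {"intervention_e": 1e6,  "intervention_f": 2e5},
}

def normalize_within_worldview(estimates):
    """Rescale each worldview so its best intervention scores 1.0.

    This avoids a linear cross-worldview weighting, where whichever
    worldview happens to have the largest raw numbers overwhelms the rest.
    """
    best = max(estimates.values())
    return {name: value / best for name, value in estimates.items()}

normalized = {
    worldview: normalize_within_worldview(estimates)
    for worldview, estimates in raw_cost_effectiveness.items()
}

for worldview, scores in normalized.items():
    print(worldview, scores)
```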
Linch, that sounds like a reasonable approach. I think something like that could work.
Ultimately, I guess a high value per dollar would then be assigned to anyone who donated ~‘maximally our best-guess impactfully’ in any of the three categories,
… and then the value would scale ~linearly with the amount donated ‘max-impactfully’ to any of the three categories.
Might be somewhat difficult to explain this to a smart popular audience, but I suspect it might be doable.
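To make the scoring rule concrete, here’s a toy sketch of what I have in mind; the BEST_GUESS_MAX numbers and the donor_score helper are purely hypothetical placeholders, not settled figures.

```python
# Toy donor-scoring sketch (hypothetical names and numbers).
# A donation is scored linearly in the amount given, weighted by how
# cost-effective the chosen intervention is relative to the best-guess
# maximum within the donor's own worldview category.

BEST_GUESS_MAX = {  # best-guess cost-effectiveness of the top option per worldview
    "humans_only": 0.02,
    "including_animals": 15.0,
    "longtermist": 1e6,
}

def donor_score(amount_donated, worldview, intervention_cost_effectiveness):
    """Score = amount * (chosen intervention's CE / best CE in that worldview).

    Donating $1,000 'max-impactfully' in any of the three categories gets the
    same score of 1,000; less effective choices scale down proportionally.
    """
    relative_effectiveness = intervention_cost_effectiveness / BEST_GUESS_MAX[worldview]
    return amount_donated * relative_effectiveness

# Example: $1,000 to the top humans-only option vs. a half-as-effective one.
print(donor_score(1000, "humans_only", 0.02))   # 1000.0
print(donor_score(1000, "humans_only", 0.01))   # 500.0
```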
My suspicion is that there will only be a very narrow and “lucky” range of moral and belief parameters where the three cause areas have cost-effectiveness estimates within the same order of magnitude.
But I should dig into this.
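One crude way to dig into it: sweep a grid of moral-weight and belief parameters and count how often the three categories land within an order of magnitude of each other. Everything below (the parameter grids and the placeholder value-per-dollar formulas) is a made-up illustration of the shape of that check, not real estimates.

```python
# Crude sensitivity-sweep sketch (all parameters and formulas are placeholder
# guesses, just to show the shape of the check described above).
import itertools
import math

# Hypothetical parameter grids: animal moral weight relative to a human,
# and the probability assigned to the long-term future going well.
animal_weight_grid = [0.001, 0.01, 0.1, 0.5]
longterm_prob_grid = [0.001, 0.01, 0.1]

# Placeholder 'value per dollar' models for a representative intervention
# in each category, as a function of the parameters.
def humans_only_value():
    return 0.02                      # e.g. WALYs per dollar, placeholder

def animals_value(animal_weight):
    return 500.0 * animal_weight     # many cheap animal-years, placeholder

def longtermist_value(p_good_future):
    return 1e7 * p_good_future       # huge payoff times small probability, placeholder

within_one_oom = 0
total = 0
for w, p in itertools.product(animal_weight_grid, longterm_prob_grid):
    values = [humans_only_value(), animals_value(w), longtermist_value(p)]
    spread = math.log10(max(values) / min(values))
    total += 1
    if spread <= 1.0:                # all three within one order of magnitude
        within_one_oom += 1

print(f"{within_one_oom}/{total} parameter combinations land within one order of magnitude")
```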