One approach would be to say the curve represents the instrumental effects of humanity on the intrinsic value of all beings at that time. This might work, though it does have some surprising consequences, such as that even after our extinction the trajectory might not stay at zero, and different trajectories could behave differently after our extinction.
This seems very natural to me and I’d like us to normalise including non-human animal wellbeing, and indeed the wellbeing of any other sentience, together with human wellbeing in analyses such as these.
We should use a different term from “humanity”. I’m not sure what the best choice is; perhaps “Sentientity” or “Sentientkind”.
On your last point, I really like “sentientkind”, but the one main time I used it (when brainstorming org names) I received feedback from a couple of non-EAs that it sounds a bit weird and sci-fi, and thus might not be the best term. (I’ve not managed to come up with a better alternative for a full-moral-circle analogue to “humankind”, though.)