I do worry about future animal suffering. It's partly for that reason that I'm less concerned about reducing extinction risk than about reducing other existential risks that would result in large amounts of future suffering. This informed my choice of interventions that I'm 'not clueless about'. E.g.
Technical AI alignment / AI governance and coordination research: it has been suggested that misaligned AI could be a significant s-risk.
Expanding our moral circle: its relevance to future suffering should be obvious.
Global priorities research: this just seems robustly good; it's hard to see how improving our moral understanding could be bad.
Research into consciousness: seems really important given the risk that future digital minds could suffer.
Research into improving mental health: improving mental health has intrinsic worth, and I don't see a clear link to increased future suffering (in fact, I lean towards thinking that happier people and societies are less likely to act in morally outrageous ways).
I do lean towards thinking reducing extinction risk is net positive in expectation too, but I am quite uncertain about this and I don’t let it motivate my personal altruistic choices.