EA as a whole tends toward maximizing welfare (you can see relevant discussion in the proposed definition of EA here). While suffering and well-being may not be simple opposites (something I'm currently trying to understand), the analyses are arguably similar with the tools we have today. So ACE and GiveWell should be pretty safe bets.
Thinking about the long term, the Center on Long-Term Risk works from a suffering-focused ethics approach. This view can result in different cause prioritization.