Operations Generalist at Anthropic & former President of UChicago Effective Altruism. Suffering-focused.
Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.
Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.
Thanks a lot for your work on this neglected topic!
You mention:
Could you give more detail on which of the counter-considerations (and motivations) you consider strongest?