Operations Generalist at Anthropic & former President of UChicago Effective Altruism. Suffering-focused.
Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.
Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.
The principles have given me a great framework to think about things, and a community to motivate me to keep being rigorous about it!
However, I’ve become a lot more worried about relying on consequentialist calculations, and I’ve ended up a bit more virtue-ethics-y as a result. I’m not sure I’m good for the world; it’s possible that if I tried to have less impact, I’d have more confidence I’m not net negative. The perils of working in AI!