Operations Generalist at Anthropic & former President of UChicago Effective Altruism. Suffering-focused.
Currently testing fit for operations. Tentative long-term plan involves AI safety field-building or longtermist movement-building, particularly around s-risks and suffering-focused ethics.
Cause priorities: suffering-focused ethics, s-risks, meta-EA, AI safety, wild animal welfare, moral circle expansion.
Thank you for this—I found it at least as useful as Luisa’s (fantastic) post. : )
I teared up reading this, mostly because I felt really validated in how I've slowly been tackling my imposter syndrome (getting feedback, reminding myself not to focus on comparisons, and focusing on better mapping the world rather than making useless value judgments). I also happen to think that you are a wonderful member of the EA community who is doing good work with the Forum, so this nudges me towards thinking that if really cool people feel this way, maybe I can be a really cool person too!