Currently researching how cost-benefit analysis is used in US regulatory decision-making and what this might imply for the regulation of Frontier AI. Supervised by John Halstead (GovAI).
In the past, I’ve done community building and operations at GovAI, CEA, and the SERI ML Alignment Theory Scholars program. My degree is in Computer Science.
I also sometimes worry about the big-picture epistemics of EA à la “Is EA just an ideology like any other?”.
I agree with you: being "a highly cool and well-networked EA" and "doing things which need to be done" are different goals. This post is heavily influenced by my experience as a new community builder and my perception that, in this situation, the two are pretty similar. If I weren't so sociable and network-y, I'd probably still be running my EA reading group with ~6 participants, which is nice but not "doing things which need to be done". For technical alignment researchers, this is probably less the case, though still much more so than I would've expected.