I'm a former software engineer at Google, now doing independent AI alignment research.
Because of my focus on AI alignment, I tend to post more on LessWrong and AI Alignment Forum than I do here.
I’m always happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message!
That is high-value work. Holden Karnofsky’s list of “important, actionable research questions” about AI alignment and strategy includes one about figuring out what should be done during the deployment of advanced AI and in the lead-up to it (1):