Undergraduate and EA organizer at Williams. Prospective longtermist researcher. AI Impacts intern June–September 2022. Elections junkie.
For the rest of 2022, I'll be in Berkeley (June–September) and then Oxford (from September). Please send me a private message if you're nearby or if we might have overlapping interests!
Some things I’d be excited to talk about:
- What happens after an intelligence explosion
- What happens if most people appreciate AI
- International relations in the context of powerful AI
- Policy responses to AI: what's likely to happen and what would be good
I would replace “avoiding x-risk” with “avoiding stuff like extinction” in this question. SBF’s usage is nonstandard: an existential catastrophe is typically defined as an event that causes us to be able to achieve at most a small fraction of our potential. An event that left us able to achieve only 10^-30 of our potential would still count as an existential catastrophe.
Regardless, I’m not aware of much thought on how to improve the future conditional on avoiding stuff like extinction (or similar questions, like how to improve the future conditional on achieving aligned superintelligence).