I’m on an (unintended) Gap Year at the moment and will study maths at university next year. Right now I’m exploring cause prioritisation.
Previously I was focused on nuclear war, but I no longer think it’s worth working on: it seems very intractable, and the extinction risk it poses is very low. I’ve also explored AI Safety (doing the AI Safety Fundamentals course), but my coding really isn’t up to scratch at the moment.
The main thing I’m focusing on right now is cause prioritisation; in particular, I’m still quite sceptical of the case for working on extinction risks.
Things I’ve done:
Non-Trivial Fellowship: I produced an explainer of the risks posed by improved precision in nuclear warfare.
AI Safety Fundamentals: I produced an explainer of superposition in neural networks (https://chrisclay.substack.com/p/what-is-superposition-in-neural-networks).
I see the argument from the US government’s value of a statistical life used a lot, and I’m not sure I agree with it. I don’t think that figure echoes public sentiment so much as a government’s desire to absolve itself of blame; note how much more is spent per life saved on, say, air transport safety than on disease prevention.