I’m on an (unintended) gap year at the moment and will study maths at university next year. Right now I’m exploring cause prioritisation.
Previously I focused on nuclear war, but I no longer think it’s worth working on: it seems highly intractable, and the extinction risk it poses is very low. I’ve also explored AI safety (through the AI Safety Fundamentals course), but my coding isn’t up to scratch at the moment.
My main focus at the moment is cause prioritisation; I’m still quite sceptical of the case for prioritising extinction risks.
Things I’ve done:
- Non-Trivial Fellowship: I produced an explainer of the risks posed by improved precision in nuclear warfare.
- AI Safety Fundamentals: I produced this explainer of superposition: https://chrisclay.substack.com/p/what-is-superposition-in-neural-networks
The ’80% utilitarian’ approach you’re describing makes more sense if you think of it as threshold deontology: you reason as a utilitarian most of the time, but hold strict ethical constraints in extreme cases (e.g. you don’t murder someone, even if doing so would somehow produce a large moral benefit). I think most EAs implicitly operate like this.
But I agree, most fields would benefit from less jargon.