You might browse Intro to Brain-Like-AGI Safety, or check back in a few weeks once it's all published. Towards the end of the sequence Steve intends to include "a list of open questions and advice for getting involved in the field."
DeepMind takes a fair amount of inspiration from neuroscience.
Diving into their related papers might be worthwhile, though the emphasis is often on capabilities rather than safety.
Your personal fit is a huge consideration when evaluating the two paths (80,000 Hours might be able to help you think through this). But if you're on the fence, I'd lean towards the more technical degree.
We needn't take on reputational risk unnecessarily, but if it is possible for EAs to coordinate to stop a Cultural Revolution, that would seem to be a Cause X candidate. Toby Ord describes a great-power war as an existential risk factor, since it would worsen our odds on AI, nuclear war, and climate change all at once. I think losing free expression would also qualify as an existential risk factor.