I think it’s a bad idea for most people to do Neuroscience PhDs. PhDs in general are not optimised for truth-seeking, working on high-impact projects, or maximising your personal wellbeing. In fact, rates of anxiety and depression are higher amongst graduate students than amongst people of a similar age with college degrees. You also get paid extremely badly, which is a problem for people with families or other financial commitments. For any specific question you want to ask, it seems worth investigating whether you could do the same work in industry or at a non-profit, where you may be able to study the same questions in a more focused way outside of academia.
So I don’t think doing a Neuro PhD is the most effective route to working on AI Safety. That said, there seem to be some useful research directions if you want to pursue a Neuro PhD program anyway. Some examples include: interpretability work that can be translated from natural to artificial neural networks; studying neural learning algorithms specifically; or doing completely computational research, i.e. a backdoor CS PhD where you fit your models to neural data collected by other people. (CS PhD programs are insanely competitive right now, and Neuroscience professors are desperate for lab members who know how to code, so this is one way into a computational academic program at a top university if you’re ok working on Neuroscience-relevant research questions.)
Vael Gates (who did a Computational/Cognitive Neuroscience PhD with Tom Griffiths, one of the leaders of this field) has some further thoughts that they’ve written up in this EA Forum post. I completely agree with their assessment of neuroscience research from the perspective of AI Safety research here:
Final note: cellular/molecular neuroscience, circuit-level neuroscience, cognitive neuroscience, and computational neuroscience are some of the divisions within neuroscience, and the skills in each of these subfields have different levels of applicability to AI. My main point is that I don’t think any of these without an AI / computational background will help you contribute much to AI safety, though I expect that most computational neuroscientists and a good subset of cognitive neuroscientists will indeed have AI-relevant computational backgrounds. One can ask me what fields I think would be readily deployed towards AI safety without any AI background, and my answer is: math, physics (because of its closeness to math), maybe philosophy and theoretical economics (game theory, principal-agent, etc.)? I expect everyone else without exposure to AI will have to reskill if they’re interested in AI safety, with that being easier if one has a technical background. People just sometimes seem to expect pure neuroscience (absent computational subfields) and social science backgrounds to be unusually useful without further AI grounding, and I’m worried that this is trying to be inclusive when it’s not actually the case that these backgrounds alone are useful.
Going slightly off on a tangent: your original question specifically mentions moral uncertainty. I share Geoffrey Miller’s view from his comment on this thread that Psychology is a more useful discipline than Neuroscience for studying moral uncertainty.
On the flip side, I think psychologists have done very interesting/useful research on human values (see this paper on how normal people think about population ethics, also eloquently written up as a shorter/more readable EA Forum post here). In this vein, I’ve also been very impressed by work produced by psychologists working with empirical philosophers, for example this paper on the Psychology of Existential Risk.
If you want to focus on moral uncertainty, you can collect way more information from a much more diverse set of individuals by focusing on behaviour instead of neural activity. As Geoffrey mentions, it is *much* easier/cheaper to study people’s opinions or behaviour than it is to study their neural activity. For example, it costs ~$5 to pay somebody to take a quick survey on moral decisions, vs. about $500 an hour to run an fMRI scanner for one subject to collect a super messy dataset that’s incredibly difficult to interpret. People do take research more seriously if you slap a photo of a brain on it, but that doesn’t mean the brain data adds anything more than aesthetic value.
It might make sense for you to check out what EA Psychologists are actually doing to see if their research seems more up your alley compared to the neuroscience questions you’re interested in. A good place to start is here: https://www.eapsychology.org/
Abby—excellent advice. This is consistent with what I’ve seen in neuroscience, psychology, and PhD programs in general.
Thanks! I agreed with and appreciated your thoughts on how Psych can actually be relevant to human value alignment as well, especially compared to Neuro!