Hi Kaj,
Thanks for writing this. Since you mention some 80,000 Hours content, I thought I’d respond briefly with our perspective.
We had intended the career review and AI safety syllabus to be about what you’d need to do from a technical AI research perspective. I’ve added a note to clarify this.
We agree that there are a lot of approaches you could take to tackle AI risk, but we currently expect that technical AI research is where a large share of the effort will be required. However, we’ve also advised many people on non-technical routes to impacting AI safety, so we don’t think it’s the only valid path by any means.
We’re planning to release other guides and paths for non-technical approaches, such as the AI safety policy career guide, which also recommends studying political science and public policy, law, and ethics, among other fields.
Hi Peter, thanks for the response!
Your comment seems to suggest that you don’t think the arguments in my post are relevant for technical AI safety research. Do you feel that I didn’t make a persuasive case for psych/cogsci being relevant for value learning/multi-level world-models research, or do you not count these as technical AI safety research? Or am I misunderstanding you somehow?
I agree that the “understanding psychology may help persuade more people to work on/care about AI safety” and “analyzing human intelligences may suggest things about takeoff scenarios” points aren’t related to technical safety research, but value learning and multi-level world-models are very much technical problems to me.
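(To make “technical problem” concrete: value learning can be framed as an inference problem, inferring which reward function an agent is optimizing from its observed behavior. Here’s a minimal toy sketch of that framing in Python, in the style of Bayesian inverse reinforcement learning; the states, candidate reward functions, and numbers are all illustrative, not taken from any particular system.)

```python
# Toy illustration of value learning as an inference problem: given observed
# behaviour, infer which reward function the agent is optimising.
# Simplified Bayesian inverse-reinforcement-learning setup; all names and
# numbers here are illustrative.
import math

STATES = ["A", "B", "C"]

# Hypothesis space: three candidate reward functions the agent might hold.
CANDIDATE_REWARDS = {
    "prefers_A": {"A": 1.0, "B": 0.0, "C": 0.0},
    "prefers_B": {"A": 0.0, "B": 1.0, "C": 0.0},
    "prefers_C": {"A": 0.0, "B": 0.0, "C": 1.0},
}

def choice_likelihood(chosen, rewards, beta=3.0):
    """Boltzmann-rational choice model: P(choose s) is proportional to
    exp(beta * reward(s)), so higher-reward states are chosen more often."""
    weights = {s: math.exp(beta * rewards[s]) for s in STATES}
    total = sum(weights.values())
    return weights[chosen] / total

def posterior(observed_choices):
    """Posterior over reward hypotheses given observed choices (uniform prior)."""
    scores = {}
    for name, rewards in CANDIDATE_REWARDS.items():
        likelihood = 1.0
        for choice in observed_choices:
            likelihood *= choice_likelihood(choice, rewards)
        scores[name] = likelihood
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

# The agent was observed choosing B twice and A once; which values explain that?
print(posterior(["B", "B", "A"]))
```

Running this, the posterior concentrates on “prefers_B”, the hypothesis that best explains the observed choices. Even in this toy form, the hard parts (modeling imperfect human rationality, choosing the hypothesis space) are recognizably technical research questions.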
We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML rather than cog sci/psych. Caveats:
Personal fit could dominate this equation though, so I’d be excited about people tackling AI safety from a variety of fields.
It’s an equilibrium: the more people there are already attacking a problem with one toolkit, the more valuable it becomes to send people to learn other toolkits to attack it.
Got it. To clarify: if the question is framed as “should AI safety researchers learn ML, or should they learn cogsci/psych”, then I agree that it seems better to learn ML.