Hi Peter, thanks for the response!
Your comment seems to suggest that you don’t think the arguments in my post are relevant for technical AI safety research. Do you feel that I didn’t make a persuasive case for psych/cogsci being relevant for value learning/multi-level world-models research, or do you not count these as technical AI safety research? Or am I misunderstanding you somehow?
I agree that the “understanding psychology may help persuade more people to work on/care about AI safety” and “analyzing human intelligences may suggest things about takeoff scenarios” points aren’t related to technical safety research, but value learning and multi-level world-models are very much technical problems to me.
We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML rather than cog sci/psych. Caveats:
- Personal fit could dominate this equation, though, so I’d be excited about people tackling AI safety from a variety of fields.
- It’s an equilibrium: the more people already attacking a problem using one toolkit, the more we should be sending people to learn other toolkits to attack it.
> it seems more useful to learn ML rather than cog sci/psych.

Got it. To clarify: if the question is framed as “should AI safety researchers learn ML, or should they learn cogsci/psych”, then I agree that it seems better to learn ML.