We agree these are technical problems, but for most people, all else being equal, it seems more useful to learn ML rather than cog sci/psych.
Caveats:
Personal fit could dominate this equation though, so I’d be excited about people tackling AI safety from a variety of fields.
It’s an equilibrium. The more people already attacking a problem using one toolkit, the more we should be sending people to learn other toolkits to attack it.
“it seems more useful to learn ML rather than cog sci/psych.”
Got it. To clarify: if the question is framed as “should AI safety researchers learn ML, or should they learn cog sci/psych?”, then I agree that learning ML seems better.