Meditations on careers in AI Safety

I am currently facing a choice between two good options: either making a career change to do a postdoc in AI Safety, or staying in my current field of expertise (quantum algorithms) at one of the best quantum computing startups, which pays really well, allowing me to earn to give, and also lets me work remotely near my girlfriend (who brings me a lot of joy). I’m honestly quite confused about what would be best for me to do.

As part of the decision process, I have talked with my family, to whom I have carefully explained the problem of AI Safety and why experts believe it is important. They have, however, raised a question that seems valid:

If the community has so much money, and we believe this is such an important problem, why can’t we just hire/fund world experts in AI/ML to work on it?

These are some of the answers I have heard in my local community, as well as in the AGI Safety Fundamentals Slack channel:

  • Most experts are not aligned, in the sense that they do not understand the AI Safety problem well enough. But this does not square well with the fact that they recognize this is an important problem. After all, it would be strange for them to call it an important problem without really understanding it.

  • We want to grow the field slowly enough that we can control the quality of the research and avoid ending up with a reputation crisis. Perhaps, but then shouldn’t we still focus on hiring/funding AI experts rather than on career changers from among undergraduates or graduate students?

  • This is not a well-known enough problem. Same counter-argument as the previous one.

  • Most experts prefer working on topics where the problem is concrete enough that they can play with toy problems and somewhat gamify the scientific process. I have found some evidence for this (https://scottaaronson.blog/?p=6288#comment-1928022 and https://twitter.com/CraigGidney/status/1489803239956508672?t=JCmx7SC4PhxIXs8om2js5g&s=19), but it is not conclusive. I like this argument.

  • Researchers do not find this problem interesting enough, or they may think the problem is not so important or is very far away in time, and therefore they are not willing to accept money to work on a different topic.

  • They have other commitments, to the people they work with and to their area of expertise. They believe they are making a greater contribution by staying in their subfield. Money is also not an issue for them.

  • The field is so young that people without much expertise are almost as effective as seasoned AI professors. In other words, the field is preparadigmatic. But I think that even if we still have to create the tools, people with AI expertise are more likely to do it better.

  • We need more time to organize ourselves or convince people because we are just getting started.

  • Maybe we’ve not done things right?

  • (Ryan Carey’s opinion): Senior researchers are less prone to changing their research field.

All of this suggests to me that the number one priority for solving AI Safety is making it concrete enough that researchers can easily get absorbed by small subproblems. For example, we could define a few concrete approaches that allow people to make progress at a concrete level, even if we don’t solve AI Safety once and for all, as perhaps Yudkowsky would hope.

In any case, my friend Jaime Sevilla argues that at a community level it is probably better to leave earning to give to people who can earn more than $1M. But I would like to hear your thoughts on this decision and on what I should do, as well as to better understand why we can’t just “buy more experts to work on this problem”. Note that with the funding provided, experts could themselves hire their own postdocs, Ph.D. students, etc. This may lead to fewer career changes, though: for example, I have found it difficult to get a postdoc in AI because of my different background, which makes me a prima facie less attractive candidate.

Thanks!