Some thoughts:
1) Most importantly: In your planning, I would explicitly include the variable of how happy you are. In particular, if the AI Safety option would result in a break-up of a long-term & happy relationship, or cause you to be otherwise miserable, it is totally legitimate to not take the AI Safety option, even if it were higher "direct" impact. (If you need an impact-motivated excuse, which might even be true, then think about the indirect impact of avoiding signalling "we only want people who are so hardcore that they will be miserable just to do this job".)
2) My guess: Given that you think your QC work is unlikely to be relevant to AI Safety, I personally believe that (ignoring the effect on you) the AI Safety job is higher impact.
3) Why is it hard to hire world experts to work on this? (Some thoughts, possibly overlapping with what other people wrote.)
"World experts in AI/ML" are, kinda tautologically, experts in AI/ML, not in AI Safety. (E.g., "even" you and I have more "AI Safety" expertise than most AI/ML experts.)
Most problems around AI Safety seem vague, and thus hard to delegate to people who don't have their own models of the topic. Such models take time to develop, so these people might not be productive for a year (or two? or more? I am not sure) even if they are genuinely committed to AI Safety work.
Top people might be more motivated by prestige than money. (And being “bought off” seems bad from this point of view, I guess.)
Top people might be more motivated by personal beliefs than money. (So the bottleneck is convincing them, not money.)
4) I am tempted to say that all the people who could be effectively bought with money are already being bought with money, so you donating doesn’t help here. But I think a more careful phrasing is “recruiting existing experts is bottlenecked on other things than money (including people coming up with good recruiting strategies)”.
5) Phrased differently: In our quest for developing the AI Safety field, there is basically no tradeoff between “hiring ‘more junior’ people (like you)” and “recruiting senior people”, even if those more junior people would go earning to give otherwise.
Agreed. The AIS job will have higher direct impact, but career transitions and relocating are both difficult. Before taking the plunge, I'd suggest people consider whether they would be happy with the move, and whether they have thought through some of the sacrifices involved. For instance, if the transition to AIS research is only partially successful, would they be happy spending time on non-research activities like directing funds or advising talent?
Thanks for your comments, Ryan :) I think I would be ok if I try and fail; of course I would much prefer succeeding, but I think I am happier knowing I'm doing the best I can do than if I try to compare myself to some unattainable level.
That being said, there is some sacrifice, as you mention, particularly in having to learn a new research area and in spending time away, both of which you understand :)
+1 to all of this. Sounds like a very tough decision. If it were me, I would probably choose quality of life and stick with the startup. (I might also donate to more funding-constrained areas like global development and animal welfare.)
Thanks for making concrete bets @aogara :)