I do think that this turned out well for me, and that I would have been significantly worse off if I hadn’t started working in safety directly. But this was partly a lucky coincidence: when making this decision three years ago, I didn’t intend to become a philosopher. If I hadn’t gotten a job at DeepMind, then my underestimate of the usefulness of upskilling might have led me astray.
I agree it’s partly a lucky coincidence, but I also count it as some general evidence. I.e., insofar as careers are unpredictable, upskilling in a single area may be a bit less reliably good than expected, compared with placing yourself in a situation where you get exposed to lots of information and inspiration that’s directly relevant to the things you care about. (That last bit is unfortunately vague, but it gestures at something that direct work offers more of.)
Yep, I agree with this. On the other hand, since AI safety is mentorship-constrained, if you have good opportunities to upskill in mainstream ML, then taking them frees up some resources for other people. It also involves building wider networks. So maybe “similar expected value” is a bit too strong, but not by much.