The most common reason that someone who I would be excited to work with at MIRI chooses not to work on AI alignment is that they decide to work on some other important thing instead, eg other x-risk or other EA stuff.
But here are some anonymized recent stories of talented people who decided to do non-EA work instead of taking opportunities to do important technical work related to x-risk (for context, I think all of these people are more technically competent than me):
One was very comfortable in a cushy, highly paid job which they already had, and thought it would be too inconvenient to move to an EA job (which would have also been highly paid).
One felt that AGI timelines are probably relatively long (eg the probability of AGI in the next 30 years seemed pretty small to them), which made AI safety feel not very urgent. So they decided to take an opportunity which they thought would be really fun and exciting, rather than working at MIRI, which they thought would be a worse fit for a particular skill set they'd been developing for years; this person thinks that they might come back and work on x-risk after they've had another job for a few years.
One was in the middle of a PhD and didn’t want to leave.
One felt unsure about whether it's reasonable to believe all the unusual things that the EA community believes, and didn't find the arguments compelling enough to feel morally obligated to leave their current lucrative job.
I feel sympathetic to the last three but not to the first.