What stops AI Safety orgs from just hiring ML talent outside EA for their junior/more generic roles?
I’d love to see a detailed answer to this question.
I think a key bottleneck for AI alignment right now is finding people who can identify promising research directions (and then lead the relevant projects) that might actually reduce x-risk, which is why I'm confused that some career guides list software and ML engineering among the best ways to contribute. I struggle to see how software and ML engineering could be a bottleneck when there are so many talented software and ML engineers outside the EA community. Counterpoint: infohazards mean you can't just hire anyone.