In 2015-2016, we found 46 people who reported a plan change and who (i) want to work on AI safety research, (ii) are concerned about existential risks, (iii) have studied a relevant quantitative subject, and (iv) switched towards this path due to us.
Cheers for the update.
How useful do these plan changes feel? e.g. how senior are these people? They could be ranked from 1 "Undergrad", 2 "Applying for grad school", 3 "Grad school / starting to write papers", 4 "Postdoc", to 5 "Professor". There could be similar questions about how safety-oriented they are, how quantitatively oriented they are, and how high-achieving they are generally.
Hey Ryan,
Good question. There are some stats here:
https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/#technical-ai-safety-research-pipeline-of-50-people
Just let me know if you have more questions after that.