Based on talking to various researchers, I’d say there are fewer than 50 people doing promising work on existential AI safety, and fewer than 200 thinking about AI safety full-time in any reasonable framing of the problem.
If you think that AI safety is 10x as large as, say, biorisk, and returns are logarithmic, we should allocate 10x the resources to AI safety as to biorisk. And biorisk is still larger than most causes. So it’s fine for AI safety to not be quite as neglected as the most neglected causes.
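To spell out the allocation step: one way to model this (my assumption, not necessarily the model the claim had in mind) is total impact a·log(x) + b·log(B − x), where x is spent on AI safety, B − x on biorisk, and a/b = 10 captures AI safety being 10x as large. Maximizing gives x* = aB/(a+b), so the optimal spending ratio is exactly a/b = 10. A quick numerical check:

```python
import math

def best_split(a, b, budget, steps=110_000):
    """Grid-search x maximizing a*log(x) + b*log(budget - x)."""
    best_x, best_v = None, float("-inf")
    for i in range(1, steps):
        x = budget * i / steps
        v = a * math.log(x) + b * math.log(budget - x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

# With a/b = 10 and a budget of 110, the optimum puts ~100 into
# AI safety and ~10 into biorisk: a 10:1 split, matching a/b.
x = best_split(a=10, b=1, budget=110)
print(round(x))
```

So under log returns the 10x-larger cause gets 10x the funding, not all of it, which is what keeps biorisk worth funding too.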
Which leads to the question of how we can get more people producing promising work in AI safety. There are plenty of highly intelligent people who are capable of contributing, yet almost none of them do. Popularizing AI safety might help indirectly, by convincing people with the potential to do this work to actually start on it. It could also be an incentive problem: maybe potential AI safety researchers think they can make more money in other fields, or maybe there are barriers that make it extremely difficult to become an AI safety researcher.
If you don’t mind me asking, which AI safety researchers do you think are doing the most promising work? Also, are there any AI safety researchers who you think are the least promising, or are doing work that is misguided or harmful?