It seems like there are many more people who want to get into AI safety, and MIRI's fundamental research, than there is room to mentor and manage them. There are also many independent/volunteer researchers.
It seems that your current strategy is to focus on training, hiring, and doing outreach to the most promising talented individuals.
Other alternatives might include more engagement with amateurs, and providing more assistance to groups and individuals that want to learn and conduct independent research.
Do you see it the same way? This strategy makes a lot of sense, but I am curious about your take on it.
Also, what would change if you had 10 times the management and mentorship capacity?
It seems that your current strategy is to focus on training, hiring, and doing outreach to the most promising talented individuals.
This seems like a pretty good summary of the strategy I work on, and it’s the strategy that I’m most optimistic about.
Other alternatives might include more engagement with amateurs, and providing more assistance to groups and individuals that want to learn and conduct independent research.
I think it would be quite costly and difficult for more experienced AI safety researchers to try to cause more good research to happen by engaging more with amateurs or providing more assistance to independent researchers. So I think that experienced AI safety researchers will probably do more good by spending more time on their own research than by trying to help other people with theirs. This is because I think experienced and skilled AI safety researchers are much more productive than other people, and because a reasonably large number of very talented math/CS people become interested in AI safety every year, so we can set a pretty high bar for which people to spend a lot of time with.
Also, what would change if you had 10 times the management and mentorship capacity?
If I had ten times as many copies of various top AI safety researchers and could only use them for management and mentorship, I'd try to get them to talk to many more AI safety researchers, through things like weekly hour-long calls with PhD students or running more workshops like MSFP.