This was nice to read, because I’m not sure I’ve ever seen anyone actually admit this before.
Not everyone agrees with me on this point. Many safety researchers think their path to impact is establishing a strong research community around safety, which seems more plausible as a mechanism for affecting the world 50 years out than the “my work is actually relevant” plan. (And partially for this reason, these people tend to do different research than I do.)
You say you think there’s a 70% chance of AGI in the next 50 years. How low would that probability have to be before you’d say, “Okay, we’ve got a reasonable number of people to work on this risk, we don’t really need to recruit new people into AI safety”?
I don’t know at what size of the AI safety field marginal effort would be better spent elsewhere. Presumably this is a continuous thing rather than a discrete one. E.g., it seems to me that there are way more people in AI safety now than five years ago, so if your comparative advantage is some other way of positively influencing the future, you should more strongly consider that other thing.