“And it seems to me that the stories that I have for how my work ends up making a difference to the world, most of those just look really unlikely to work if AGI is more than 50 years off. It’s really hard to do research that impacts the world positively more than 50 years down the road.”
This was nice to read, because I’m not sure I’ve ever seen anyone actually admit this before.
You say you think there’s a 70% chance of AGI in the next 50 years. How low would that probability have to be before you’d say, “Okay, we’ve got a reasonable number of people to work on this risk, we don’t really need to recruit new people into AI safety”?
“This was nice to read, because I’m not sure I’ve ever seen anyone actually admit this before.”
Not everyone agrees with me on this point. Many safety researchers think that their path to impact is establishing a strong research community around safety, which seems more plausible as a mechanism for affecting the world 50 years out than the “my work is actually relevant” plan. (And partly for this reason, these people tend to do different research from me.)
“You say you think there’s a 70% chance of AGI in the next 50 years. How low would that probability have to be before you’d say, ‘Okay, we’ve got a reasonable number of people to work on this risk, we don’t really need to recruit new people into AI safety’?”
I don’t know at what size of the AI safety field marginal effort would be better spent elsewhere. Presumably this is a continuous thing rather than a discrete one. E.g., compared to five years ago there are way more people in AI safety now, so if your comparative advantage is in some other way of positively influencing the future, you should consider that other thing more strongly.