“25 to 35 years before we think most of this risk will occur. That is a long time”
Is it really?
Another reason for doing direct work sooner is that if the amount of AI safety work being performed is growing, then by working sooner, you will be able to do a larger fraction of the total.
E.g. if you think AI risks might arrive in either 10 or 50 years, and you think a lot of AI safety research is going to happen only after 20 years, then your relative contribution may be larger if AI arrives in 10 years, which makes it valuable to do the research soon.
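To make that concrete, here's a toy calculation (the numbers and the growth pattern are purely illustrative, not estimates of anything):

```python
# Toy numbers, purely illustrative: I contribute 1 unit of safety work per
# year from now on, while the rest of the field produces little until year 20
# and then ramps up sharply.
def field_effort(t):
    return 1 if t < 20 else 10  # field output in year t (hypothetical)

def my_share(arrival_year):
    my_work = arrival_year * 1  # my cumulative output by the time AI arrives
    field_work = sum(field_effort(t) for t in range(arrival_year))
    return my_work / (my_work + field_work)

print(f"AI in 10 years: my share of total work ~ {my_share(10):.0%}")  # ~50%
print(f"AI in 50 years: my share of total work ~ {my_share(50):.0%}")  # ~14%
```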
“by working sooner, you will be able to do a larger fraction of the total.”
You mean because of the diminishing returns to this work? If so, I'd respond that by grabbing the low-hanging fruit sooner, you leave less low-hanging fruit for others, which makes their later contributions less effective. These two effects should roughly cancel out.
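A toy way to see the cancellation (assuming, purely for illustration, that the n-th unit of safety work is worth 1/n):

```python
# Purely illustrative diminishing-returns curve: the n-th unit of safety work
# (counting the easiest problems first) is worth 1/n, whoever does it and
# whenever it gets done.
def marginal_value(n):
    return 1 / n

units_done = 10
total = sum(marginal_value(n) for n in range(1, units_done + 1))
# The total depends only on how many units get done. If I grab units 1-3 (the
# low-hanging fruit), later contributors are left with the less valuable units
# 4-10; if I come late, the reverse. Either way the sum is the same.
print(f"Total value of {units_done} units: {total:.2f}")
```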
A different case would be if the amount of AI safety work being done increases as a function of the work that has already been done (rather than as a function of time or of general AI progress). Then you would expect a logarithmic/exponential increase of AI safety work over time. In this case, grabbing the low-hanging fruit sooner would shift progress in the field forward more than if you contributed later, as you said.
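A rough sketch of the contrast (all parameters made up for illustration): in model A the field's yearly output grows with the calendar year, while in model B it grows with the stock of work already done, so an early contribution compounds.

```python
# Made-up parameters, for illustration only.
def cumulative_work(years, my_early_unit, grows_with_stock):
    total = my_early_unit  # an extra unit I contribute in year 0
    for t in range(years):
        base = 1.0
        growth = 0.2 * total if grows_with_stock else 0.2 * t
        total += base + growth
    return total

for label, stock in [("A: field effort grows with time", False),
                     ("B: field effort grows with work already done", True)]:
    gain = cumulative_work(20, 1.0, stock) - cumulative_work(20, 0.0, stock)
    print(f"{label}: my early unit adds {gain:.1f} units by year 20")
# In model A the early unit just adds itself (~1.0); in model B it compounds
# (~38), which is the sense in which grabbing low-hanging fruit early shifts
# the whole field forward.
```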
I don't think this is the case for AI safety research, though it could be for a technology like cultured meat, for example.
I didn't quite understand your example, though, so this might be a misunderstanding. I guess what you mean is that for a risk where we might be in a dangerous phase for longer (e.g. syn bio), the safety work should be done sooner, but it may mostly get done after the risk arrives?
That would be true. But the point would remain that work done decades before the risk appears has lower value.
If you thought that AI could arrive in 10 years and the safety work would only get done in 20 years, that's a reason to do the work more quickly, of course. But I don't think that's actually what you mean?