This focus on long-term field-building and trajectory change is different to biorisk, or short-timeline AI safety. For these two causes, there is risk of lock-in of some very bad state (extinction, or worse) sometime soon. This means it’s more urgent to do direct work right now to avoid the lock-in.
Did you mean:
(1) “The urgency for direct work right now is greater in biorisk and short-timeline AI safety than in global poverty, animal welfare, or mental health, because of the greater chance of lock-in in biorisk and short-timeline AI safety”?
Or (2) “In biorisk and short-timeline AI safety, it’s more urgent to do direct work right now to avoid lock-in than to do long-term field-building and trajectory change”?
If you mean (1), I agree, and think that’s a good point. (The case doesn’t seem 100% settled, but it seems to me clear enough to act on.)
If you mean (2), I think that’s less clear. I don’t disagree; I just don’t know. Since you’re specifying short-timeline AI safety, that does push in favour of direct work right now. But even a “short timeline” might be decades, in which case field-building and trajectory change might be better. And biorisk may be with us for decades or centuries (perhaps partly dependent on AI timelines too).
(I hope to post soon about the matter of optimal timing for work or donations, outlining in a structured way (hopefully) all the key arguments and questions people have raised.)
Yep, I meant (1) - thanks for checking. Also, that post sounds great; let me know if you want me to look over a draft :)