This focus on long-term field-building and trajectory change is different from biorisk, or short-timeline AI safety. For these two causes, there is risk of lock-in of some very bad state (extinction, or worse) sometime soon. This means it's more urgent to do direct work right now to avoid the lock-in.
Did you mean:
(1) "The urgency for direct work right now is greater in biorisk and short-timeline AI safety than in global poverty, animal welfare, or mental health, because of the greater chance of lock-in in biorisk and short-timeline AI safety"?
Or (2) "In biorisk and short-timeline AI safety, it's more urgent to do direct work right now to avoid lock-in than to do long-term field-building and trajectory change"?
If you mean (1), I agree, and think that that's a good point. (It doesn't seem the case is 100% settled, but it seems to me clear enough to act on.)
If you mean (2), I think that's less clear. I don't disagree; I just don't know. Since you're specifying short-timeline AI safety, that does push in favour of direct work right now. But even a "short timeline" might be decades, in which case field-building and trajectory change might be better. And biorisk may be with us for decades or centuries (perhaps partly dependent also on AI timelines).
(I hope to post soon about the optimal timing for work or donations, outlining in a structured way, hopefully, all the key arguments and questions people have raised.)
Yep, I meant (1) - thanks for checking. Also, that post sounds great; let me know if you want me to look over a draft :)