A lot of EAs do think AI safety could become ridiculously important (i.e. they place some probability mass on very short timelines) but are not in a position to do anything about it, which is why they focus on more tractable areas (e.g. global health, animal welfare, EA community building) under the assumption of longer AI timelines, especially because there’s a lot of uncertainty about when AGI would arrive.
My internal view is a 25% probability of TAI by 2040 and 50% by 2060, where I define TAI as an AI with the ability to autonomously perform AI research. These estimates may have shifted in light of DeepSeek, but what am I supposed to do? I’m just a freshman at a non-prestigious university. Am I supposed to drop all my commitments, speed-run my degree, get myself into a highly competitive AI lab (which would probably require a PhD), and work on technical alignment hoping for a breakthrough? If TAI comes within 5 years, that would be the right move, but if I’m wrong I would end up with very shallow skills and not much experience.
We have the following Pascal matrix (drafted by GPT):
| Decision | AGI Comes Soon (~2030s) | AGI Comes Late (~2060s+) |
|---|---|---|
| Rush into AI Now | 🚀 Huge impact, but only if positioned well | 😬 Career stagnation, lower expertise |
| Stay on Current Path | 😢 Missed critical decisions, lower impact | 📈 Strong expertise, optimal positioning |
I know the decision is not binary, but I am definitely willing to forfeit 25% of my impact by betting on the AGI-comes-late scenario. I do think non-AI cause areas should use AI projections in their deliberations and theories of change, but I think it is silly to cut out everything that happens after 2040 with respect to the cause area.
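To make that trade-off concrete, here is a minimal expected-value sketch of the matrix above, using my ~25% credence in TAI by 2040. The impact payoffs are purely illustrative assumptions, not estimates of anyone's actual impact:

```python
# Rough expected-impact comparison for the two strategies in the matrix above.
# Payoff numbers are illustrative assumptions (arbitrary units), not real estimates.

p_agi_soon = 0.25  # ~25% credence in TAI by 2040

# (impact if AGI comes soon, impact if AGI comes late)
payoffs = {
    "Rush into AI Now":     (1.0, 0.3),  # huge impact only if positioned well; stagnation otherwise
    "Stay on Current Path": (0.0, 1.0),  # missed the critical window; strong positioning otherwise
}

for strategy, (impact_soon, impact_late) in payoffs.items():
    expected_impact = p_agi_soon * impact_soon + (1 - p_agi_soon) * impact_late
    print(f"{strategy}: expected impact ≈ {expected_impact:.2f}")
```

With these made-up numbers, staying on the current path comes out ahead (≈0.75 vs ≈0.48) even though it writes off the AGI-soon scenario entirely; the conclusion obviously flips if you put enough weight or payoff on short timelines.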
However, I do think EAs should have a contingency plan: speed-run into AI safety if and only if certain trigger conditions occur (e.g. even conservative superforecasters project AGI before 2040, or a national emergency is declared). And we can probably hedge against the AGI-comes-soon scenario by buying long-term NVIDIA call options.