Heya, I’m not an AI guy anymore so I find these posts kinda tricky to wrap my head around. So I’m earnestly interested in understanding: If AGI is that close, surely the outcomes are completely overdetermined already? Or if they’re not, surely you only get to push the outcomes by at most 0.1% on the margins (which is meaningless if the outcome is extinction/not extinction)? Why do you feel like you have agency in this future?
I get that it can be tricky to think about these things.
I don’t think the outcomes are overdetermined: there are many research areas that would benefit a lot from additional effort, policy is high-leverage and can absorb many more people, and advocacy is only just getting started and will grow enormously.
AGI being close arguably decreases tractability, but it also increases neglectedness: with less time left, less total effort will ever be spent on AI safety, so every additional person makes a larger relative increase in that effort.
The fact that the stakes are extinction increases, not decreases, the value of shifting the needle even marginally. Working on AI safety saves thousands of present human lives in expectation.
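To make that concrete, here is a rough back-of-the-envelope expected-value sketch; the one-in-a-million risk reduction is an illustrative assumption, not a figure claimed above. If one additional person’s work lowers the probability of extinction by $\Delta p = 10^{-6}$, then over roughly $N = 8 \times 10^{9}$ people alive today,

$$\mathbb{E}[\text{lives saved}] = \Delta p \cdot N \approx 10^{-6} \times 8 \times 10^{9} = 8{,}000 \text{ present lives},$$

which is why even a tiny marginal shift is far from meaningless when the outcome in question is extinction.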