I get that it can be tricky to think about these things.
I don’t think the outcomes are overdetermined—there are many research areas that can benefit a lot from additional effort, policy is high leverage and can absorb a lot more people, and advocacy is only starting and will grow enormously.
AGI being close possibly decreases tractability, but on the other hand it increases neglectedness: with less time remaining, every additional person constitutes a larger relative increase in the total effort spent on AI safety.
The fact that it’s about extinction increases, not decreases, the value of marginally shifting the needle: when the downside is that large, even a small change in the probability of a good outcome carries enormous expected value. Working on AI safety saves thousands of present human lives in expectation.