I’m unwilling to pin this entirely on epistemic uncertainty, and specifically don’t think everyone agrees that interventions targeting AI safety, for example, aren’t the only thing that matters; some people hold that they are the only thing that matters, period. (Though that is arguably not even a longtermist position.)
But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).