Another way to see it is that there are two different sorts of arguments for prioritising existential risk reduction—an empirical argument (the risk is large) and a philosophical/ethical argument (even small risks are hugely harmful in expectation, because of the implications for future generations). (Of course this is a bit schematic, but I think the distinction may still be useful.)
I guess the fact that EA is quite a philosophical movement may be one reason why there’s been a substantial (but by no means exclusive) focus on the philosophical argument. It’s also easier to convey quickly, whereas the empirical argument requires much more time.
To actually achieve the goals of longtermism, it seems like much more work needs to happen on translational research: communicating academic x-risk work in policymakers’ language for instrumental ends, not necessarily in strictly ‘correct’ ways.
I liked this comment.
This sentence wasn’t quite clear to me.