I really liked this episode, because of Carl’s no-nonsense, moderate approach. Though I must say I’m a bit surprised that some in the EA community seem to see the ‘commonsense argument’ as some kind of revelation. See, for example, the 80,000 Hours email newsletter that comes via Benjamin Todd (“Why reducing existential risk should be a top priority, even if you don’t attach any value to future generations”, 16 Oct 2021). I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I said as much in my 2018 paper on New Zealand and Existential Risks (see p.63 here). I thought I was pretty late to the party at that point, and Carl was probably years down the track.
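As a minimal sketch of the kind of calculation I have in mind (every number below is purely an illustrative assumption, not a figure from the episode or the paper):

```python
# Back-of-envelope: expected life-years saved for the present generation only,
# from a small reduction in the probability of an extinction-level catastrophe.
# All inputs are illustrative assumptions.

population = 8e9                       # people alive today (approx.)
avg_remaining_life_years = 40          # rough average remaining life expectancy per person
extinction_risk_this_century = 0.01    # assumed baseline probability of extinction this century
relative_risk_reduction = 0.10         # assume an intervention trims 10% off that risk

absolute_risk_reduction = extinction_risk_this_century * relative_risk_reduction
expected_life_years_saved = population * avg_remaining_life_years * absolute_risk_reduction

print(f"Expected life-years saved: {expected_life_years_saved:,.0f}")
# ~320,000,000 expected life-years, counting only people alive today.
```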
However, if this argument is not widely understood (and that’s a big ‘if’, because I think it should be pretty easy for anyone to have deduced), then I wonder why? Maybe it’s because the origins of the EA focus on x-risk hark back to papers like ‘Astronomical Waste’, which basically take longtermism as the starting point and then argue for the importance of existential risk reduction. Whereas if you take government cost-effectiveness analysis (CEA) as the starting point, especially in the domain of healthcare where cost-per-QALY is the currency, then existential risk just looks like a limiting case of these CEAs, and its priority simply emerges from the calculation (even when only considering THE PRESENT generation).
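To illustrate the ‘limiting case’ point, here is a hedged sketch of how such an intervention would look alongside a standard cost-per-QALY threshold (the programme cost and the threshold are assumptions for the sake of the example):

```python
# Illustrative cost-effectiveness comparison: the same x-risk intervention
# evaluated like any other programme in a healthcare-style CEA.
# The programme cost and threshold are assumptions, not real figures.

programme_cost = 1e9                 # assumed total cost of the intervention (USD)
expected_life_years_saved = 3.2e8    # from the back-of-envelope calculation above
cost_per_qaly_threshold = 50_000     # a commonly cited healthcare benchmark (USD per QALY)

cost_per_life_year = programme_cost / expected_life_years_saved

print(f"Cost per expected life-year saved: ${cost_per_life_year:,.2f}")
print(f"Typical healthcare threshold:      ${cost_per_qaly_threshold:,} per QALY")
# Roughly $3 per expected life-year against a ~$50,000 threshold: the intervention
# clears the usual bar by orders of magnitude, using only the present generation.
```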
The real question then becomes: WHY don’t government risk assessments and CEAs plug in the probabilities and impacts for x-risk? Two key suppositions are unfamiliarity (i.e. a knowledge gap) and intractability (i.e. a lack of policy response options). Yet there has now been substantial progress on both fronts.
All this is important because, in the eyes of government policymakers and, more importantly, Ministers with the power to make decisions about resource allocation, longtermism (especially in its strong form) is seen as somewhat esoteric and disconnected from day-to-day business. Yet it seems the objectives of strong longtermism (if indeed it stands up to empirical challenges, e.g. how the Fermi paradox is resolved will have implications for the strength of strong longtermism) can be met through simple, ordinary CEA arguments, or at least such arguments can be used for leverage. To actually achieve the goals of longtermism, it seems like MUCH more work needs to be happening in translational research to communicate academic x-risk work into policymakers’ language for instrumental ends, not necessarily in strictly ‘correct’ ways.
I liked this comment.
Another way to see it is that there are two different sorts of arguments for prioritising existential risk reduction—an empirical argument (the risk is large) and a philosophical/ethical argument (even small risks are hugely harmful in expectation, because of the implications for future generations). (Of course this is a bit schematic, but I think the distinction may still be useful.)
I guess the fact that EA is quite a philosophical movement may be a reason why there’s been a substantial (but by no means exclusive) focus on the philosophical argument. It’s also easier to convey quickly, whereas the empirical argument requires much more time.
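To make that parenthetical concrete, here is a rough sketch of the expectation point (the future-population figure is just a placeholder in the spirit of ‘Astronomical Waste’-style estimates, not a considered estimate):

```python
# Why "even small risks are hugely harmful in expectation" once future
# generations are included. All numbers are illustrative placeholders.

present_life_years = 8e9 * 40    # people alive today x average remaining life-years
future_life_years = 1e16 * 40    # placeholder count of potential future lives x a nominal
                                 # lifespan, in the spirit of 'Astronomical Waste'-style figures

tiny_risk_reduction = 1e-6       # a one-in-a-million reduction in extinction probability

print(f"Present generation only:      {present_life_years * tiny_risk_reduction:,.0f} expected life-years")
print(f"Including future generations: {future_life_years * tiny_risk_reduction:,.0f} expected life-years")
# ~320,000 vs ~400,000,000,000: the same tiny risk reduction looks modest for the
# present generation but astronomically valuable in expectation.
```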
“To actually achieve the goals of longtermism, it seems like MUCH more work needs to be happening in translational research to communicate academic x-risk work into policymakers’ language for instrumental ends, not necessarily in strictly ‘correct’ ways.”

This sentence wasn’t quite clear to me.