Thanks for writing this! I’m curating it.
There are roughly two parts to the post:
a sketch cost-benefit analysis (CBA) for whether the US should fund interventions reducing global catastrophic risk (roughly sections 2-4)
an argument for why longtermists should push for a policy of funding all those GCR-reducing interventions that pass a cost-benefit analysis test and no more (except to the extent that a government should account for its citizens’ altruistic preferences, which in turn can be influenced by longtermism)
“That is because (1) unlike a strong longtermist policy, a CBA-driven policy would be democratically acceptable and feasible to implement, and (2) a CBA-driven policy would reduce existential risk by almost as much as a strong longtermist policy.”
I think the second part presents more novel arguments for readers of the Forum, but the first part is an interesting exercise, and important to sketch out to make the argument in part two.
Assorted thoughts below.
1. A graph
I want to flag a graph from further into the post that some people might miss. Its caption reads: “The x-axis represents U.S. lives saved (discounted by how far in the future the life is saved) in expectation per dollar. The y-axis represents existential-risk-reduction per dollar. Interventions to the right of the blue line would be funded by a CBA-driven catastrophe policy. The exact position of each intervention is provisional and unimportant, and the graph is not to scale in any case…” (The graph itself isn’t reproduced here.)
2. Outlining the cost-benefit analysis
I do feel like a lot of the numbers used for the sketch CBA are hard to defend, but I get the sense that you’re approaching those as givens, and then asking what e.g. people in the US government should do if they find the assumptions reasonable. At a brief skim, the support for “how much the interventions in question would reduce risk” seems weakest (and I’m a little worried about how this is approached; flagged below).
I’ve pulled out some fragments that produce a ~BOTEC for the cost-effectiveness of a set of interventions from the US government’s perspective (bold mine):
A “global catastrophe” is an event that kills at least 5 billion people. The model assumes that each person’s risk of dying in a global catastrophe is equal.
Overall risk of a global catastrophe: “Assuming independence and combining Ord’s risk-estimates of 10% for AI, 3% for engineered pandemics, and 5% for nuclear war gives us at least a 17% risk of global catastrophe from these sources over the next 100 years.[8] If we assume that the risk per decade is constant, the risk over the next decade is about 1.85%.[9] If we assume also that every person’s risk of dying in this kind of catastrophe is equal, then (conditional on not dying in other ways) each U.S. citizen’s risk of dying in this kind of catastrophe in the next decade is at least 5/9×1.85%≈1.03% (since, by our definition, a global catastrophe would kill at least 5 billion people, and the world population is projected to remain under 9 billion until 2033). According to projections of the U.S. population pyramid, 6.88% of U.S. citizens alive today will die in other ways over the course of the next decade.[10] That suggests that U.S. citizens alive today have on average about a 1% risk of being killed in a nuclear war, engineered pandemic, or AI disaster in the next decade. That is about ten times their risk of being killed in a car accident.[11]”
A lot of ink has been spilled on these risk estimates, but I don’t get the sense that there’s much agreement.
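For what it’s worth, the quoted arithmetic does check out. Here’s a minimal sketch reproducing it, taking Ord’s century-level estimates as given (variable names are mine; nothing here goes beyond the quoted figures):

```python
# Reproducing the quoted decade-risk BOTEC, taking Ord's century-level
# estimates (10% AI, 3% engineered pandemics, 5% nuclear war) as given.
p_ai, p_bio, p_nuke = 0.10, 0.03, 0.05

# Assuming independence: P(at least one global catastrophe this century).
p_century = 1 - (1 - p_ai) * (1 - p_bio) * (1 - p_nuke)   # ~17.1%

# Assuming constant per-decade risk: (1 - p_decade)**10 = 1 - p_century.
p_decade = 1 - (1 - p_century) ** (1 / 10)                # ~1.85%

# A global catastrophe kills at least 5 billion of a sub-9-billion world,
# so each person's conditional risk of dying in one is at least 5/9.
p_death_per_person = (5 / 9) * p_decade                   # ~1.03%

print(f"{p_century:.1%}, {p_decade:.2%}, {p_death_per_person:.2%}")
```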
How much would a set of interventions cost: “We project that funding this suite of interventions for the next decade would cost less than $400 billion.[16]” — the footnote reads “The Biden administration’s 2023 Budget requests $88.2 billion over five years (The White House 2022c; U.S. Office of Management and Budget 2022). We can suppose that another five years of funding would require that much again. A Nucleic Acid Observatory covering the U.S. is estimated to cost $18.4 billion to establish and $10.4 billion per year to run (The Nucleic Acid Observatory Consortium 2021: 18). Ord (2020: 202–3) recommends increasing the budget of the Biological Weapons Convention to $80 million per year. Our listed interventions to reduce nuclear risk are unlikely to cost more than $10 billion for the decade. AI safety and governance might cost up to $10 billion as well. The total cost of these interventions for the decade would then be $319.6 billion.”
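The footnote’s sum also checks out. A quick tally of the quoted figures, in billions of dollars (the $400 billion headline is this total plus headroom):

```python
# Tallying the quoted cost figures, in $ billions.
biden_budget = 88.2 * 2    # the 2023 Budget's five-year request, assumed to repeat
nao = 18.4 + 10.4 * 10     # Nucleic Acid Observatory: setup plus ten years of operation
bwc = 0.08 * 10            # Biological Weapons Convention at $80M/year for a decade
nuclear = 10.0             # listed nuclear-risk interventions (upper bound)
ai = 10.0                  # AI safety and governance (upper bound)

total = biden_budget + nao + bwc + nuclear + ai
print(round(total, 1))     # -> 319.6
```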
How much would the interventions reduce risk: “We also expect this suite of interventions to reduce the risk of global catastrophe over the next decade by at least 0.1pp (percentage points). A full defence of this claim would require more detail than we can fit in this chapter, but here is one way to illustrate the claim’s plausibility. Imagine an enormous set of worlds like our world in 2023. … We claim that in at least 1-in-1,000 of these worlds the interventions we recommend above would prevent a global catastrophe this decade. That is a low bar, and it seems plausible to us that the interventions above meet it.”
This seems under-argued, and even without thinking about it for long, it’s probably the point in the model I’d most want to see more work on. (For context, 0.1pp is about a 5% relative reduction of the 1.85% decade-risk quoted above.)
I also worry a bit that bundling interventions like this (and estimating cost-effectiveness for the whole bundle rather than for each intervention individually) invites problems: funding interventions that aren’t cost-effective on their own because they ride along with the group, or losing the interventions that account for the bulk of the risk reduction, e.g. if advocates for the bundle win only a partial success that drops a particularly valuable component like funding for AI safety research.
The value of a statistical life (VSL), i.e. the value of saving one life in expectation via small reductions in mortality risk across many people: “The primary VSL figure used by the U.S. Department of Transportation for 2021 is $11.8 million, with a range to account for various kinds of uncertainty spanning from about $7 million to $16.5 million (U.S. Department of Transportation 2021a, 2021b).” (With a constant annual discount rate.)
Should the US fund these interventions? (Yes)
“given a world population of less than 9 billion and conditional on a global catastrophe occurring, each American’s risk of dying in that catastrophe is at least 5⁄9. Reducing GCR this decade by 0.1pp then reduces each American’s risk of death this decade by at least 0.055pp. Multiplying that figure by the U.S. population of 330 million, we get the result that reducing GCR this decade by 0.1pp saves at least 181,500 American lives in expectation. If that GCR-reduction were to occur this year, it would be worth at least $1.27 trillion on the Department of Transportation’s lowest VSL figure of $7 million. But since the GCR-reduction would occur over the course of a decade, cost-benefit analysis requires that we discount. If we use OIRA’s highest annual discount rate of 7% and suppose (conservatively) that all the costs of our interventions are paid up front while the GCR-reduction comes only at the end of the decade, we get the result that reducing GCR this decade by 0.1pp is worth at least $1.27 trillion / 1.07^10 = $646 billion. So, at a cost of $400 billion, these interventions comfortably pass a standard cost-benefit analysis test.[20] That in turn suggests that the U.S. government should fund these interventions. Doing so would save American lives more cost-effectively than many other forms of government spending on life-saving, such as transportation and environmental regulations. In fact, we can make a stronger argument. Using a projected U.S. population pyramid and some life-expectancy statistics, we can calculate that approximately 79% of the American life-years saved by preventing a global catastrophe in 2033 would accrue to Americans alive today in 2023 (Thornley 2022). 79% of $646 billion is approximately $510 billion. That means that funding this suite of GCR-reducing interventions is well worth it, even considering only the benefits to Americans alive today.[21]”
(The authors also flag that this significantly underrates the cost-effectiveness of the interventions, since it accounts neither for the fact that the interventions also reduce the risks from smaller catastrophes nor for the deaths of non-US citizens.)
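And a sketch of the valuation step, again using only the quoted inputs (the 0.1pp GCR-reduction, the 5/9 share rounded down to 0.055pp as in the quote, the $7M VSL floor, OIRA’s 7% rate, and the 79% share of life-years accruing to present Americans; variable names are mine):

```python
# Reproducing the quoted valuation, using only the quoted inputs.
risk_reduction = 0.055 / 100    # 5/9 * 0.1pp, rounded down to 0.055pp as in the quote
population = 330e6
vsl = 7e6                       # DoT's lowest VSL figure, in dollars

lives_saved = risk_reduction * population           # 181,500 American lives in expectation
value_undiscounted = lives_saved * vsl              # ~$1.27 trillion
value_discounted = value_undiscounted / 1.07 ** 10  # ~$646 billion (benefits at decade's end)
value_present_americans = 0.79 * value_discounted   # ~$510 billion

print(f"{lives_saved:,.0f} lives; ${value_discounted / 1e9:.0f}B; "
      f"${value_present_americans / 1e9:.0f}B to Americans alive today")
```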
3. Some excerpts from the argument about what longtermists should advocate for that I found insightful or important
“getting governments to adopt a CBA-driven catastrophe policy is not trivial. One barrier is psychological (Wiener 2016). Many of us find it hard to appreciate the likelihood and magnitude of a global catastrophe. Another is that GCR-reduction is a collective action problem for individuals. Although a safer world is in many people’s self-interest, working for a safer world is in few people’s self-interest. Doing so means bearing a large portion of the costs and gaining just a small portion of the benefits.[28] Politicians and regulators likewise lack incentives to advocate for GCR-reducing interventions (as they did with climate interventions in earlier decades). Given widespread ignorance of the risks, calls for such interventions are unlikely to win much public favour. / However, these barriers can be overcome.”
“getting the U.S. government to adopt a CBA-driven catastrophe policy would reduce existential risk by almost as much as getting them to adopt a strong longtermist policy. This is for two reasons. The first is that, at the current margin, the primary goals of a CBA-driven policy and a strong longtermist policy are substantially aligned. The second is that increased spending on preventing catastrophes yields steeply diminishing returns in terms of existential-risk-reduction.” (I appreciated the explanations given for the reasons.)
“At the moment, the world is spending very little on preventing global catastrophes. The U.S. spent approximately $3 billion on biosecurity in 2019 (Watson et al. 2018), and (in spite of the wake-up call provided by COVID-19) funding for preventing future pandemics has not increased much since then.[32] Much of this spending is ill-suited to combatting the most extreme biological threats. Spending on reducing GCR from AI is less than $100 million per year.[33]”
“here, we believe, is where longtermism should enter into government catastrophe policy. Longtermists should make the case for their view, and thereby increase citizens’ AWTP [altruistic willingness to pay] for pure longtermist goods like refuges.[38] When citizens are willing to pay for these goods, governments should fund them.”
“One might think that it is true only on the current margin and in public that longtermists should push governments to adopt a catastrophe policy guided by cost-benefit analysis and altruistic willingness to pay. [...] We disagree. Longtermists can try to increase government funding for catastrophe-prevention by making longtermist arguments and thereby increasing citizens’ AWTP, but they should not urge governments to depart from a CBA-plus-AWTP catastrophe policy. On the contrary, longtermists should as far as possible commit themselves to acting in accordance with a CBA-plus-AWTP policy in the political sphere. One reason why is simple: longtermists have moral reasons to respect the preferences of their fellow citizens. [Another reason why is that] the present generation may worry that longtermists would go too far. If granted imperfectly accountable power, longtermists might try to use the machinery of government to place burdens on the present generation for the sake of further benefits to future generations. These worries may lead to the marginalisation of longtermism, and thus an outcome that is worse for both present and future generations.”