Thanks for sharing this. While I think there are strong reasons to invest heavily in AI safety, I’m concerned this particular cost-benefit framing may not be as compelling as it initially appears.
The paper uses a $10 million value of a statistical life (VSL) to justify spending $100,000 per person to avoid a 1% mortality risk. However, if we’re being consistent with cost-effectiveness reasoning, we should note that GiveWell-recommended charities save lives in the developing world for approximately $5,000 each, roughly 2,000 times less than the VSL-implied cost per statistical life.
By this logic, the same funding directed toward global health interventions would save orders of magnitude more lives with near-certainty, whereas spending on AI safety reduces x-risk only with uncertain probability.
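To make that comparison concrete, here is the back-of-the-envelope arithmetic using only the figures above (treating the $5,000 GiveWell figure as a rough benchmark rather than a precise cost):

$$\text{spend per person} = \mathrm{VSL} \times \Delta p = \$10{,}000{,}000 \times 0.01 = \$100{,}000$$

$$\frac{\text{VSL-implied cost per statistical life}}{\text{GiveWell cost per life saved}} \approx \frac{\$10{,}000{,}000}{\$5{,}000} = 2{,}000$$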
This doesn’t mean AI safety is a bad investment—there are strong arguments based on:
- The value of preserving future generations (which Jones notes would increase spending estimates)
- Diminishing returns or bottlenecks in scaling proven global health interventions
- The categorical importance of preventing existential catastrophe
- Portfolio diversification across different types of risk
(Note: comment generated in collaboration with AI)
I completely agree with your comment. However, my interpretation of what Professor Jones is trying to do is slightly different from straightforward cause prioritisation in the EA sense.
I think he is trying to frame AI risk reduction in a way that is compelling to policymakers by focusing on standard benchmark values (the Value of a Statistical Life) and limiting his analysis in space (only ‘valuing’ the lives of American citizens) and time (only the next 20 years). This puts the report in line with standard government cost-benefit analyses, which may make it more convincing to those who have access to policy levers.