Suppose it takes $100 billion to increase our chance of completely averting extinction (or the equivalent) by 0.1%. By this, I don’t mean reducing an extinction event to one that only kills 98% of people, or preventing the disempowerment of humanity by AI; I mean that we save the entire world’s population. For convenience, I’ll assume no diminishing marginal returns. If we only count the 7 generations of lost wellbeing after the event, and compute $100 billion / (7 * 8 billion * 0.1%), we get a cost-effectiveness of roughly $1,780 to save a life. This estimate is extremely uncertain, and even so it is only in the same ballpark as the Malaria Consortium’s seasonal malaria chemoprevention program (which takes ~$4,500 to save a life). The comparison is also complicated by the fact that near-term animal charities and the like are funding-constrained, while longtermist organisations are less so. Unlike under a strong longtermist view, it’s not at all clear under this view that it would be worthwhile to pivot your career to AI safety or biorisk, instead of taking the more straightforward route of earning to give to standard near-term interventions.
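As a quick sanity check on the arithmetic, here is a minimal sketch of the calculation above, using only the assumptions stated in this comment (the $100 billion price tag, the 0.1% risk reduction, and 7 generations of 8 billion people each):

```python
# Back-of-the-envelope cost per life saved, using the assumptions above.
spend = 100e9               # $100 billion to buy the risk reduction
risk_reduction = 0.001      # 0.1% absolute reduction in extinction risk
generations = 7             # generations of lost wellbeing counted after the event
people_per_generation = 8e9

expected_lives_saved = risk_reduction * generations * people_per_generation
cost_per_life = spend / expected_lives_saved
print(f"${cost_per_life:,.0f} per expected life saved")  # ≈ $1,786
```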
My best estimate of the price of decreasing extinction risk by 0.1% is under $10 billion. Linch has only thought about this for a few hours, but he’s pretty well informed on the state of megaprojects, and others have thought about it for longer. This is consistent with my own estimates too.
One thing I find really tricky about this is figuring out where the margin will end up in the future.
It seems likely to me that $100bn will be spent on x-risk reduction over the next 100 years irrespective of what I do. My efforts mainly top up that pot.
Personally, I expect the next $10bn might well reduce x-risk by ~1% rather than 0.1%; but it’ll be far less once we get into the next $90bn, and then the $100bn after that. It might well be a lot less than 0.1% per $10bn.
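To make the diminishing-returns point concrete, here is a toy sketch with hypothetical spending tranches. Only the first tranche’s ~1% figure comes from the comment above; the later figures are illustrative placeholders for “a lot less than 0.1% per $10bn”, and the same seven-generations accounting as in the parent comment is used:

```python
# Hypothetical spending tranches with sharply diminishing marginal returns.
# Only the first tranche's ~1% figure comes from the comment; the rest are
# illustrative placeholders.
tranches = [
    (10e9, 0.01),     # first $10bn: ~1% absolute risk reduction
    (90e9, 0.001),    # next $90bn: 0.1% in total
    (100e9, 0.0005),  # $100bn after that: 0.05% in total
]

generations, people_per_generation = 7, 8e9
for spend, reduction in tranches:
    cost_per_life = spend / (reduction * generations * people_per_generation)
    print(f"${spend/1e9:.0f}bn tranche: ~${cost_per_life:,.0f} per expected life saved")
```

On these made-up numbers the marginal cost per life rises from roughly $18 to roughly $3,600 across tranches, which is why where the margin ends up matters so much for the comparison with near-term interventions.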
Yes, this is a really good point. I meant to make it when I first read Thomas’ comment but then forgot about it as I was typing up my own comment.
I think

it’ll be far less once we get into the next $90bn, and then the $100bn after that. It might well be a lot less than 0.1% per $10bn.

might be a plausible position after the movement has a few more years of experience and researchers have put a few thousand hours of research and further thinking into this question, but right now we (or at least I) don’t have a strong enough understanding of the landscape to confidently believe in very low cost-effectiveness for the last dollar. In slightly more mathy terms, we might have a bunch of different cost-effectiveness distributions in the ensemble that forms our current prior, which means we can’t go very low (or high) if we take a weighted average across them.
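As a toy illustration of the ensemble point (all weights and figures below are hypothetical, not anyone’s actual estimates): if the prior is a weighted mixture of several views about what the last $10bn buys, the mixture’s expectation can’t end up as low as the most pessimistic component unless that component gets nearly all the weight.

```python
# Toy mixture over views about the risk reduction bought by the last $10bn.
# Weights and per-view estimates are hypothetical.
views = [
    (0.3, 1e-3),  # optimistic view: 0.1% per $10bn
    (0.4, 1e-4),  # middling view: 0.01% per $10bn
    (0.3, 1e-6),  # pessimistic view: 0.0001% per $10bn
]

expected_reduction = sum(weight * reduction for weight, reduction in views)
print(f"mixture expectation: {expected_reduction:.2e} per $10bn")  # ≈ 3.4e-04
```

The weighted average sits within a factor of a few of the optimistic component, even though one view is three orders of magnitude more pessimistic.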
The point about averaging over several cost-effectiveness distributions is interesting!
If you find the analogy helpful, my comment here mirrors Toby’s comment on why having a mixed prior on the Hinge of History question is reasonable.
(I thought about it for a few more hours and haven’t changed my numbers much).
I think it’s worth highlighting that our current empirical best guesses (with a bunch of uncertainty) are that catastrophic risk mitigation measures are probably better in expectation than near-term global health interventions, even if you only care about currently alive people.
But on the other hand, it’s also worth highlighting that you only have 1–2 orders of magnitude (OOMs) to work with, so if we only care about present people, the variance is high enough that we could easily change our minds in the future. Also, community building interventions or other “meta” interventions in global health (e.g. US foreign aid research and advocacy) may be better even on our current best guesses. Neartermist animal interventions may be more compelling as well.
Finally, which axiology you hold has implications for what you should focus on within GCR work. Because I’m personally more compelled by the longtermist arguments for existential risk reduction than the neartermist ones, I’m comparatively more excited about disaster mitigation, robustness/resilience, and recovery, not just prevention. Whereas I expect that neartermist morals + empirical beliefs about GCRs + risk-neutrality should lead you to believe that prevention and mitigation are worthwhile, but that comparatively few resources should be invested in disaster resilience and recovery for extreme disasters.
Why was this comment downvoted a bunch?
Here you go:
I think the content and speculation in your comment were both principled and within your rights to share. My guess is that a comment that comes close to saying that an EA cause area has a different EV per dollar than others can get this sort of response.
This is a complex topic. Here are some rambling, verbose thoughts that might be wrong, and that you and others might have already thought about:
This post exposes surface area for “disagreement over underlying values” in EA.
Some people don’t like a lot of math or ornate theories. For someone worried that the cause area representing their values is being affected, it can be easy to perceive the addition of a lot of math or theory as overbearing.
In certain situations, I believe “underlying values” drive a large amount of the karma of posts and comments, boosting messages whose content otherwise doesn’t warrant it. I think this is important to note: it reduces communication, it can be hard to fix (or even observe), and it is one reason it would be good to give this some attention or “work on this”[1].
I don’t think content or karma on the EA Forum has a direct, simple relationship to EA opinion overall, or to the opinions of those who work in EA cause areas. However, I know someone who has information and models about related issues and opinions from EAs “offline”, and I think this suggests these disagreements are far from an artifact of the forum or of being “very online”.
I see the underlying issues as tractable and fixable.
There is a lot of writing in this comment, but it comes from my perspective as a commenter. If a commenter takes these issues too seriously, I think it can become overbearing and make it unfairly hard to write things.
If a commenter wanted to address this, talking to a few specific people and listening can help.
I think I have some insight because of this project, but it is not easy for me to immediately explain.
I mentioned this before, but again, I don’t think strong-upvoting comments that ask why they received downvotes from others is appropriate!
It’s not at all clear under this view that it would be worthwhile to pivot your career to AI safety or biorisk, instead of taking the more straightforward route of earning to give to standard near-term interventions.

I’d disagree with this. I think the conversion of money into labour is super inefficient for longtermist work, so the analogy breaks down. Sure, maybe I should donate to the Maximum Impact Fund rather than the LTFF. But it’s really hard to usefully convert billions of dollars into useful labour on longtermist problems. So, as someone who can work on AI safety, there’s a major inefficiency factor if I pivot to earning to give (ETG). I think the consensus basically already is that ETG for longtermism is rarely worth it, unless you’re incredibly good at it.