It seems like even the AMF vs global catastrophic risk comparison on an ex ante greater burden principle will depend on how much we’re funding them, how we individuate acts, and the specifics of the risks involved. To summarize: if you invest enough in global catastrophic risk mitigation, you might be able to reduce the maximum risk of very early death for at least one individual by more than if you gave the same amount entirely to AMF, because malaria mortality rates are <1% per year where AMF works (GiveWell’s sheet), but extinction risk within the next few decades could be higher than that (mostly due to AI) and reducible in absolute terms by more than 1 percentage point with enough funding. On the other hand, some people may be identified as immunocompromised and so stand to gain much more than a 1 percentage point reduction in mortality risk from GiveWell recommendations.
I illustrate in more detail with the rest of this comment, but feel free to skip if this is already clear enough.
Where AMF works, the annual mortality rate from malaria is typically under 0.3% (see GiveWell’s sheet), and the nets last about two years (again GiveWell), so we get a maximum of around 0.6% average risk reduction per distribution of bednets (and malaria medicine, from Malaria Consortium, say). Now, maybe there are people who are particularly prone to death if they catch malaria and are identifiable as such, e.g. the identified immunocompromised. How high can the maximum ex ante risk be across individuals? I don’t know, but this could matter. Let’s say it’s 1%. I think it could be much higher, but let’s go with that to illustrate first.
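The arithmetic behind that rough 0.6% upper bound can be sketched like this (illustrative round numbers from above, not a real cost-effectiveness model):

```python
# Back-of-the-envelope upper bound on average risk reduction per bednet
# distribution, using the round figures above (hedged illustration only).
annual_malaria_mortality = 0.003  # <0.3% per year where AMF works (GiveWell)
net_lifespan_years = 2            # nets last about two years (GiveWell)

# Upper bound assumes the distribution averts every malaria death
# over the nets' lifespan.
max_avg_risk_reduction = annual_malaria_mortality * net_lifespan_years
print(f"{max_avg_risk_reduction:.1%}")  # 0.6%
```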
Suppose then that with up to $100,000 donated to AMF and Malaria Consortium, we can practically eliminate one such person’s risk of death, dropping it from 1% to around 0% (if we know where AMF and MC will work with that extra funding). On the other hand, it’s hard to see how only $100,000 targeted at catastrophic risks could reduce anyone’s risk of death by 1 percentage point. That would fund at most something like 2 people working full-time for a year, and probably less than 1 at most organizations working on x-risk, given current salaries. The same will be true separately for the next $100,000, and the next, and the next, and so on, probably up to at least the endowment of Open Phil.
However, what about all of what Open Phil is granting to GiveWell, $100 million/year (source), taken all together rather than $100K at a time? That still, by assumption, only gives a 1 percentage point reduction in mortality across the beneficiaries of GiveWell recommendations, if malaria mortality rates are representative (it could be somewhat higher for Helen Keller International and for New Incentives, if we account for individuals at increased personal risk for those too, and that covers the rest of GiveWell’s top charities). Can we reduce global catastrophic risks by more than 1 percentage point with $100 million? What about $100 million/year over multiple years? I think many concerned with AI risk would say yes. And it might even be better for those who would otherwise receive bednets to protect them from malaria.
Now, malaria incidence can be as high as around 300 cases per 1000 people in a given year in some places where AMF works (Our World in Data). If the identified immunocompromised have a 50% chance of dying from malaria if they catch it, then a naive[1] risk reduction estimate could be something like 15 percentage points. It seems hard to reduce extinction risk, or anyone’s risk of death from a global catastrophe, by that much in absolute terms (percentage points). For one, you need to believe the risk is at least 15%. And those with high risk estimates (>85%) from AI tend to be pessimistic about our ability to reduce it much. I’d guess only a minority of those working on x-risk believe we can reduce it by 15 percentage points with all of Open Phil’s endowment. You have to be in a sweet spot of “there’s a good chance this won’t go well by default, but ~most of that is avertable”.
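The naive 15-percentage-point figure is just the product of the incidence rate and the assumed case fatality rate (both numbers are the illustrative ones above, and the footnoted caveats apply):

```python
# Naive absolute risk reduction for an identified immunocompromised person,
# ignoring repeat infections in the same person and self-protection.
annual_incidence = 300 / 1000  # cases per person per year (Our World in Data)
fatality_if_infected = 0.5     # assumed 50% chance of death if infected

naive_risk_reduction = annual_incidence * fatality_if_infected
print(f"{naive_risk_reduction:.0%}")  # 15%
```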
And, on the other hand, AI pause work in particular could mean some people will definitely die who would otherwise have had a chance of survival and a very long life through AI-aided R&D on diseases, aging and mind uploading.
[1] One might expect the immunocompromised to be extra careful and buy bednets themselves, or have bednets bought for them by their family. Also, some of those 300 cases per 1000 could be multiple cases in the same person in a year.
This is the right place to press, Michael. These are exactly the probabilities that matter. Because I’m pretty pessimistic about our ability to reduce AI risk, I tend to think the numbers will break in favor of AMF. And on top of that, if you’re worried that x-risk mitigation work might sometimes increase x-risk, even a mild level of risk aversion will probably skew things toward AMF more strongly. But it’s important to bring these things out. Thanks for flagging.