Does urgency (point 2) apply to global health specifically, given the debate topic of animal welfare vs global health?
Maybe we can consider biorisk, including biorisk from TAI (EDIT: and other ways we might all die, and other GCRs), to fit inside global health, but I don’t think that’s what’s usually intended.
Global health is about the lives of humans and human suffering. It seems to me that AI safety is the #1 global health issue at large in our current world.
But if you mean 'health interventions for poor people', how do you separate that from AI risk? If you have good reason to believe that, should you fail to act, the person will be killed in less than a decade, and so will all animals, all life on Earth… It seems odd to me to put 'treat curable diseases of human population x' into a different bucket than 'keep human population x from being murdered'. Aren't these both health interventions? Don't they both deliver QALYs?
I agree you can consider them “health interventions”, but I think what people have in mind by global health in general and in this debate are mostly GiveWell recommendations, and maybe other cause areas in Open Phil’s Global Health and Wellbeing focus areas, which are separate from global catastrophic risks (GCRs). Maybe the line is somewhat artificial.
One reason to separate GCRs from global health is that GCRs and GCR interventions seem very one-shot,[1] more poorly evidenced and much more speculative than many global health interventions, like GiveWell recommendations. If you want to be more sure you're making a difference,[2] GiveWell recommendations seem better for that.
[1] Bets on whether a global catastrophe occurs at all, with highly correlated individual outcomes, rather than on individual deaths separately, e.g. one case of malaria prevented.
[2] Although perhaps a very different difference from what GiveWell estimates, since they don't account for the possibility that we all get killed by AI, or that the lives we save today go on for hundreds of years due to technological advances.
Well, if AI goes well, my short list for what to focus on next, with the incredible power unlocked by this unprecedentedly large acceleration in technological development, is: alleviating all material poverty, curing all diseases, extending human life, and (as a lower priority) ending cruel factory farming practices. This critical juncture isn't just about preventing a harm; it's a fork in the road that leads either to catastrophe or to huge wins on every current challenge. Of course, new challenges then arise, such as questions of offense-defense balance in technological advancements, rights of digital beings, government surveillance, etc.
Edit: for additional details on the changes I expect in the world if AI goes well, please see: https://darioamodei.com/machines-of-loving-grace