I think this post is mistaken. If I remember correctly (I’m not an expert), in a paper from Katja Grace, AI experts and attendees at an x-risk conference put estimates that AI will kill us all at only around 5-10%. Only AI Safety researchers think AI doom is a highly likely default (presumably due to selection effects). So from a near-termist perspective, AI deserves relatively less attention.
Bio-risk, climate change, and maybe nuclear war, on the other hand, are I think all highly concerning from a near-termist perspective, but unlikely to kill EVERYONE, and so a relatively low priority for long-termists.
In a paper from Katja Grace, AI experts and attendees at an x-risk conference put estimates that AI will kill us all at only around 5-10%.
“Only” a 5-10% chance of ~8 billion people dying this century still comes to 400-800 million deaths in expectation! Certainly higher than, e.g., estimates of malaria deaths within this century!
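To spell out the arithmetic (treating the 5-10% as the probability of an event that kills essentially all ~8 billion people, which is how I read the quoted estimate):

$$0.05 \times 8 \times 10^9 = 4 \times 10^8, \qquad 0.10 \times 8 \times 10^9 = 8 \times 10^8.$$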
What’s the case for climate change being highly concerning from a near-termist perspective? It seems unlikely to me that marginal $s spent fighting climate change are a better investment in global health than marginal $s spent directly on global health. And climate change also seems particularly unlikely to kill >400 million people.
I agree some biosecurity spending may be more cost-effective on near-termist grounds.
Hmm… I’d have to think more carefully about it; that was very much off-the-cuff. I mostly agree with your criticism. I think I was mainly arguing that bio-risk makes the most sense as a near-termist priority and so would get most of the x-risk funding until it is solved, since it is much more tractable than AI Risk.
Maybe that’s the main point I’m trying to make, and why the spirit of the post seems off to me: near-termist x-risk funding would mostly go to bio-risk, and long-termist x-risk funding would mostly go to AI.