Won’t other people take care of this—why should I additionally care?
I can’t track it down, but there’s a tweet, I think from Holden Karnofsky (who runs OpenPhil), where he says that people sometimes tell him he’s got it covered and he wants to shout “NO WE DON’T”. We’re very, very far from a safe path to a good future. It’s a hard problem, and we’re not rising to the challenge as a species.
Why should you care? You and everyone you’ve ever known will die if we get this wrong.
But people are already dying (no mights required), so why not focus on more immediate problems like global health and development?
A 10% chance of a million people dying is as bad as 100,000 people dying with certainty, if you’re risk-neutral. Essentially, that’s the main argument for working on a speculative cause like AGI safety: if there’s even a small chance of the end of humanity, that still matters a great deal.
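To spell out the arithmetic behind that claim (a quick expected-value sketch, assuming lives lost are valued linearly, i.e. risk-neutrally):

```latex
% Expected deaths in the risky scenario
\mathbb{E}[\text{deaths}_{\text{risky}}] = 0.10 \times 1{,}000{,}000 = 100{,}000
% Expected deaths in the certain scenario
\mathbb{E}[\text{deaths}_{\text{certain}}] = 1.00 \times 100{,}000 = 100{,}000
```

Both scenarios have the same expected toll, which is why even a small probability of a very large catastrophe can dominate the calculation.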
As for “Won’t other people take care of this”, well... you could make that same argument about global health and development, too. More people working on either field increases its potential impact.
(Also worth noting: EA as a whole does devote a lot of resources to global health and development; you just don’t see as many posts about it because there’s less to discuss or argue about.)