A lot of people within the effective altruist movement seem to basically agree with you. For example, Will MacAskill, one of the movement's founders, recently said he is only going to focus on artificial general intelligence (AGI) from now on. The effective altruist organization 80,000 Hours has said more or less the same: its main focus is going to be AGI. For many others in the EA movement, AGI is their top priority and the only cause they work on.
So, basically, you are making an argument for which there is already a lot of agreement in EA circles.
As you pointed out, uncertainty about AGI timelines and doubts about very near-term AGI are among the main reasons to focus on global poverty, animal welfare, or other cause areas not related to AGI.
There is no consensus on when AGI will happen.
A 2023 survey of AI experts found an aggregate prediction of a 50% chance that AI and AI-powered robots will be able to automate all human jobs by 2116. (Edited on 2025-05-05 at 06:16 UTC: I should have mentioned that the same survey also asked the experts when they think AI will be able to do all tasks that a human can do. The aggregated prediction was a 50% chance by 2047. We don't know for sure why they gave such different answers to these two similar questions.)
In 2022, a group of 31 superforecasters predicted a 50% chance of AGI by 2081.
My personal belief is that we have no idea how to create AGI and we have no idea when we'll figure out how to create it. In addition to the expert and superforecaster predictions I just mentioned, I recently wrote a rapid-fire list of reasons I think predictions of AGI within 5 years are extremely dubious.