Hi everyone,
In this recent critique of EA, Erik Hoel claims that EA is sympathetic towards letting AGI develop because of the potential for billions of happy AIs (~35 mins in). He claims that this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation).
Is this true, or is it a misrepresentation of why EA funding goes towards alignment? For example, perhaps it is because EAs think AGI is inevitable, or that it is too difficult to delay/prevent?
Thanks very much!
Lucas
I can't speak for the donors, but only trying to prevent AGI doesn't seem like a good plan. We don't know what's required for AGI. It might be easy, in which case robustly preventing it would require broad restrictions that would likely cause a lot of collateral damage (to narrow AI and to computing in general). Doing some alignment research is nowhere near as costly, and aligned AI could be useful.
While I am also worried about Will MacAskill's view as cited by Erik Hoel in the podcast, I think that Hoel does not really give evidence for his claim that "this influences EA funding to go more towards alignment rather than trying to prevent/delay AGI (such as through regulation)".
Interesting, thanks both!