I can’t speak for the donors, but only trying to prevent AGI doesn’t seem like a good plan. We don’t know what’s required for AGI. It might turn out to be easy to build, in which case robustly preventing it would require restrictions broad enough to cause a lot of collateral damage to narrow AI and to computing in general. Doing some alignment research is nowhere near as costly, and aligned AI could be useful.