AFAIK the official MIRI solution to AI risk is to win the race to AGI, but to do it aligned.
Part of the MIRI theory is that winning the AGI race gives you the power to stop anyone else from building AGI. If you believe that, then it's easy to believe that there is a race, and that you sure don't want to lose it.