Thanks for sharing. In my view, technological progress is more of a mixed bag than universally good: it is much easier for one person to make a lot of people suffer than it was hundreds of years ago. Moreover, in many domains, technological progress creates winners and losers even when the net effect is positive.
Here, for instance, a democratic society that creates advanced AI (not even AGI-level) first needs to establish an economic system that will still achieve its goals when the main asset of people who don't own AI companies (their labor) drops precipitously in value. Delay buys more time to recognize the need for, and to implement, the necessary political, social, and economic changes.
I think “time to prepare society for what is coming” is a much more sound argument than “try to stop AI catastrophe”.
I'm still not a fan of the deceleration strategy, because I believe that in any potential future where AGI doesn't kill us, it will bring about a great reduction in human suffering. However, I can definitely appreciate that this is far from a given, and it is not at all unreasonable to believe that the benefits AGI provides may be significantly or fully offset by the negative impact of removing the need for human labor!