I think “time to prepare society for what is coming” is a much more sound argument than “try to stop AI catastrophe”.
I’m still not a fan of the deceleration strategy, because I believe that in any potential future where AGI doesn’t kill us, it will bring about a great reduction in human suffering. That said, I can appreciate that this is very far from a given, and it is not at all unreasonable to believe that the benefits provided by AGI may be significantly or even fully offset by the negative impact of removing the need for human labor.