(copy-pasting my response from LessWrong:)
Good to read your thoughts.
I would agree that slowing further developments in AI capability generalisation by more than half over the next few years is highly improbable. We’ve got to work with what we have.
My mental model of the situation is different.
People engage in positively reinforcing dynamics around social prestige and market profit, even when what they are doing is net bad for the things they care about over the long run.
People are mostly egocentric and have difficulty connecting and relating, particularly in the current individualistic, social-signalling, “divide and conquer” market environment.
Scaling up deployable AI capabilities has enough of a chance of reaping extractive benefits for narcissistic/psychopathic tech-leader types that they will go ahead with it, sowing the world with techno-optimistic visions that suit their strategy. They will do so even though general AI will (cannot not) lead to the wholesale destruction of everything we care about in the society and larger environment we’re part of.