I think that this may be the case, but I would be much more cautious about trying to regulate AI development. I’d start with baby steps that mostly won’t cost too much or provoke backlash, like interpretability research.
My model of the situation is:
People are more or less rational; that is, we shouldn't expect deviations from rational-agent models.
People are mostly selfish, with altruism being essentially signalling, which has little value here.
AI has enough of a chance to bring vastly positive changes on par with a singularity that it dominates other considerations.
In other words, even if there were only a 1% chance of a singularity, it would have enough expected impact that even believing in high AI risk would be insufficient to get the population on your side.
This is why, in a nutshell, I do not think the post is correct, and why I think the AI governance/digital democracy/privacy movements are greatly overestimating the costs that can be imposed on AI companies (also known as alignment taxes).
I think AI governance could be surprisingly useful. But attempts to slow things down significantly are mostly unrealistic for the time being.
(copy-pasting my response from LessWrong:)
Good to read your thoughts.
I would agree that slowing further AI capability generalisation development by more than half in the next few years is highly improbable. We have to work with what we have.
My mental model of the situation is different.
People engage in positively reinforcing dynamics around social prestige and market profit, even if what they are doing is net bad for what they care about over the long run.
People are mostly egocentric and have difficulty connecting and relating, particularly in the current individualistic social-signalling and “divide and conquer” market environment.
Scaling up deployable AI capabilities has enough of a chance of yielding extractive benefits for narcissistic/psychopathic tech-leader types that they will go ahead with it, while seeding the world with techno-optimistic visions that suit their strategy. That is, they will proceed even though general AI will (cannot not) lead to the wholesale destruction of everything we care about in the society and larger environment we're part of.