I’m of a split mind on this. On the one hand, I definitely think this is a better way to think about what will determine AI values than “the team of humans that succeeds in building the first AGI”.
But I also think the development of powerful AI is likely to radically reallocate power, potentially toward AI developers. States derive their power from a monopoly on force, and I think there is likely to be a period before the obsolescence of human labor in which these monopolies are upset by whoever is able to most effectively develop and deploy AI capabilities. It’s not clear who this will be, but it hardly seems guaranteed to be existing state powers or property holders, and AI developers have an obvious expertise and first-mover advantage.