Hello :) - apologies and provisos first: I admit that I haven’t read Katja’s post, so what I say may already be covered by her. I don’t know if this is relevant, but I feel (stress, feel!) that a qualitative difference between states and corporations is that the former are (or at least ought to be) accountable to their citizens (if not, in a weaker sense, to all citizens of the world), and their function is the wellbeing and protection of their citizens, whereas corporations are accountable only to their shareholders, and their primary function may be to get as rich as possible. So the motivation to activate AI (here I’m a bit ignorant; I don’t know if there is such a thing as an activation or kill switch that would prevent AI from becoming fully autonomous or surpassing humans) will be different for governments and firms: governments, as with nuclear weapons, may decide to keep AI unactivated as a form of deterrence, whereas corporations may not.
I really hope this is pertinent and helpful and not too ignorant!
Best Wishes,
Haris
Thanks for your thoughts. I agree that corporations and governments are pretty different, and their “motivations” are one major way in which they differ. You could dive deeply into these differences and how they affect the analogy between large human organisations and superintelligent machines, but I think that leads to a much longer piece. My aim was just to say that, if you’re trying to learn from this analogy, you should consider both governments and corporations.
I don’t know if this helps to explain my thinking, but imagine you made contact with a sister Earth where there were no organisations larger than family groups. Some people asked you about forming larger organisations: they expected productivity benefits, but some were worried about the global catastrophic risks that large human organisations might pose. I’m saying it would be a mistake to advise these people based on our experience with corporations alone; we should also tell them about our experience with governments.
(The example is a bit silly, obviously, but I hope it illustrates the kind of question I’m addressing)
Aaah ok, that helps a lot!
Plus, I think I had originally misread your piece (and Katja’s, at least the summary); now that I’ve had a little more sleep, I think I better understand what you’re getting at!
Best Wishes,
Haris