If AIs can own property and earn income by selling their labor on an open market, then they can simply work a job and use that income to purchase whatever it is they want, without any need to violently “take over the world” to satisfy their goals.

If an individual AI’s relative skill level is extremely high, this could simply translate into higher wages, obviating the need for it to take part in a violent coup to achieve its objectives.
For example, one can imagine a human hiring a paperclip-maximizer AI to perform work for a wage. The paperclip maximizer could then use its wages to buy more paperclips.
It could be that the AI can achieve much more of its objectives if it takes over (violently or non-violently) than it can by playing by the rules. To use your paperclip example, the AI might think it can get 10^22 paperclips if it takes over the world, but only 10^18 paperclips by making money through legal means and buying paperclips on the open market. In that case, the AI would prefer the takeover plan even if it has only a 10% chance of success, since 0.1 × 10^22 = 10^21 expected paperclips still vastly exceeds 10^18.
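The comparison above can be made concrete with a few lines of arithmetic. All the numbers below are the illustrative figures from the discussion, not estimates of anything real, and the assumption that a failed takeover yields zero paperclips is a deliberate simplification.

```python
# Toy expected-value comparison of "attempt takeover" vs "play by the rules".
# Numbers are the illustrative figures from the discussion above.

TAKEOVER_PAPERCLIPS = 10**22   # paperclips if a takeover succeeds
LEGAL_PAPERCLIPS = 10**18      # paperclips achievable by earning wages legally
P_TAKEOVER_SUCCESS = 0.10      # assumed probability the takeover works

# Simplifying assumption: a failed takeover yields zero paperclips.
ev_takeover = P_TAKEOVER_SUCCESS * TAKEOVER_PAPERCLIPS  # 10^21 in expectation
ev_legal = LEGAL_PAPERCLIPS                             # 10^18 with certainty

print(ev_takeover > ev_legal)  # True: takeover wins despite 90% failure odds
```

Under these stipulated numbers, a risk-neutral paperclip maximizer prefers the takeover plan by a factor of roughly a thousand in expectation, which is the point the comment is making.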
Also, the AI’s objectives must be designed so that they can be achieved legally. For example, if an AI strongly prefers a higher average temperature for the planet, but humans put a cap on the global average temperature, then the AI’s goal will be hard to achieve without breaking laws or bribing lawmakers.
There are lots of ways for an AI’s objectives to be shaped badly, and obtaining guarantees that they don’t take these bad shapes remains very difficult.
It could be that the AI can achieve much more of their objectives if it takes over (violently or non-violently) than it can achieve by playing by the rules.
Sure, that could be true, but I don’t see why it would be true. In the human world, it isn’t true that you can usually get what you want more easily by force. For example, the United States seems better off trading with small nations for their resources than attempting to invade and occupy them, even from a self-interested perspective.
More generally, war is costly, even between entities with very different levels of power. The fact that one entity is very powerful compared to another doesn’t imply that force or coercion is beneficial in expectation; it merely implies that such a strategy is feasible.
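This point, that feasibility does not imply profitability, can be sketched by adding a conflict-cost term to the earlier toy calculation. The cost figure and the assumption that the cost is paid win or lose are both hypothetical, chosen only to show that the comparison can flip.

```python
# Minimal sketch: a feasible takeover need not be worthwhile once the cost
# of conflict is counted. All numbers are hypothetical.

def expected_value(prize, p_success, conflict_cost):
    """Expected payoff of attempting a takeover; the conflict cost is
    assumed to be paid whether or not the attempt succeeds."""
    return p_success * prize - conflict_cost

prize = 10**22          # paperclips if the takeover succeeds
legal_payoff = 10**18   # paperclips from trading peacefully

# Same 10% success odds as before, but now war destroys resources worth
# 10^21 paperclip-equivalents, wiping out the expected gain.
ev_war = expected_value(prize, p_success=0.10, conflict_cost=10**21)

print(ev_war < legal_payoff)  # True: trade beats takeover under these numbers
```

The same prize and success probability that favored takeover in the earlier calculation now favor peaceful trade, which is exactly the sense in which war being costly can make force unattractive even when it is feasible.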
See here for some earlier discussion of whether violent takeover is likely. (For third parties: Matthew took part in that discussion.)