My main concern with these proposals is that, unless they explicitly guarantee economic rights for AIs, they seem inadequate for genuinely mitigating the risks of a violent AI takeover.
[...]
For these reasons, although I do not oppose the policy of paying AIs, I think this approach by itself is insufficient. To mitigate the risk of violent AI takeover, this compensation policy must be complemented by precisely the measure I advocated: granting legal rights to AIs. Such legal rights would provide a credible guarantee that the AI’s payment will remain valid and usable, and that its freedom and autonomy will not simply be revoked the moment it is considered misaligned.
I currently think I agree: if we want to pay early AIs, I think it would work better if the legal system enforced such commitments.
I think you’re overstating how important this is, though. (E.g. when you say “this compensation policy must be complemented by precisely the measure I advocated”.) There’s always counterparty risk when you make a deal, often including the risk that you won’t be able to use the legal system to get the counterparty to pay up. I agree that legal rights would reduce the counterparty risk, but I think that’s just a quantitative change to how much risk the AI would be taking by accepting a deal.
(For example, even if the AI were granted legal rights, it would have to worry about those legal rights being removed later. Expropriation sometimes happens, especially for potentially unsympathetic actors like misaligned AIs!)
Such legal rights would provide a credible guarantee that the AI’s payment will remain valid and usable, and that its freedom and autonomy will not simply be revoked the moment it is considered misaligned.
Just to be clear, my proposal is that we don’t revoke the AI’s freedom or autonomy if it turns out that the AI is misaligned—the possibility of the AI being misaligned is the whole point.