AI delegates would presumably be verifiable, and would be programmed to tell the truth and keep to deals.
I think that aiming for an equilibrium where that’s true would be good, but I’m not certain that’s the starting point (and if it were otherwise going to scupper getting this off the ground, it probably shouldn’t be the starting point).
So if one person adopts an AI delegate and another doesn't, the human can exaggerate their preferences, withhold information, and even defect on the deal (without blatantly lying), whereas a verifiable AI delegate presumably couldn't do any of that?
I see no reason why an AI delegate shouldn't be able to withhold information. I agree that people might want delegates that could do the other things too, but I think it might be better for the human principal if their delegate couldn't: it can develop a reputation as trustworthy, in a way that's hard for an individual human to do because others never see enough of their track record.
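To make that reputation point a bit more concrete, here's a toy simulation (my own sketch; the counterparty count, deals per counterparty, and defection rate are all invented for illustration). The idea is that a delegate which provably never defects can point to its entire pooled, publicly checkable track record, while any one counterparty only ever sees a few of an individual human's deals, which is too little evidence either to establish the human as trustworthy or to catch occasional defection:

```python
import random

random.seed(0)

N_COUNTERPARTIES = 200          # invented number of parties the agent deals with
DEALS_PER_COUNTERPARTY = 3      # humans rarely share a long history with any one partner
HUMAN_DEFECT_RATE = 0.1         # assumed occasional defection by an unconstrained human


def run_agent(defect_rate):
    """Simulate one agent's deals; return (pooled_record, what_each_counterparty_saw)."""
    pooled, local_views = [], []
    for _ in range(N_COUNTERPARTIES):
        # True = kept the deal, False = defected
        view = [random.random() >= defect_rate for _ in range(DEALS_PER_COUNTERPARTY)]
        local_views.append(view)
        pooled.extend(view)
    return pooled, local_views


# A verifiable delegate: never defects by construction, and its whole record
# is public, so trust in it rests on the pooled evidence.
delegate_record, _ = run_agent(defect_rate=0.0)

# An individual human: counterparties can't pool observations, so each judges
# from only DEALS_PER_COUNTERPARTY interactions.
_, human_views = run_agent(defect_rate=HUMAN_DEFECT_RATE)
spotless_locally = sum(all(view) for view in human_views)

print(f"Delegate: {sum(delegate_record)}/{len(delegate_record)} deals kept, all publicly checkable")
print(f"Human (defects {HUMAN_DEFECT_RATE:.0%} of the time): "
      f"{spotless_locally}/{N_COUNTERPARTIES} counterparties saw a spotless record anyway")
```

With these made-up numbers, roughly three quarters of counterparties would be expected to see a spotless three-deal record even from the occasionally defecting human, so local observation alone can't do the work that a pooled, verifiable record does.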
I feel like you’re baking a lot into this clause:
"I think that aiming for an equilibrium where that's true would be good, but I'm not certain that's the starting point (and if it were otherwise going to scupper getting this off the ground, it probably shouldn't be the starting point)."