the original statement still just seems to imagine that norms will be a non-trivial reason to avoid theft, which seems quite unlikely for a moderately rational agent.
Sorry, I think you’re still conflating two different concepts. I am not claiming:
Social norms will prevent single agents from stealing from others, even in the absence of mechanisms to enforce laws against theft
I am claiming:
Agents will likely not want to establish a collective norm that it’s OK (on a collective level) to expropriate wealth from old, vulnerable individuals. The reason is that most agents will themselves become old at some point, and thus do not want there to be a norm in place at that time that would allow their own wealth to be expropriated from them.
There are two separate mechanisms at play here. Individual and local instances of theft, like robbery, are typically punished by specific laws. Collective expropriation of groups, while possible in all societies, is usually handled via more decentralized coordination mechanisms, such as social norms.
In other words, if you’re asking me why an AI agent can’t just steal from a human, in my scenario, I’d say that’s because there will (presumably) be laws against theft. But if you’re asking me why the AIs don’t all get up together and steal from the humans collectively, I’d say it’s because they would not want to violate the general norm against expropriation, especially of older, vulnerable groups.
Perhaps much of your scenario was trying to convey a different idea from what I see as the straightforward interpretation, but that makes it hard for me to engage with it productively, as it feels like engaging with a motte-and-bailey.
For what it’s worth, I asked Claude 3 and GPT-4 to proofread my essay before I posted it, and they both appeared to understand what I said, with almost no misunderstandings on any of my points (from my perspective). I am not bringing this up to claim you are dumb, or anything like that, but I do think it provides evidence that you could probably better understand what I’m saying if you read my words more carefully.