Thanks for calling this out, whoever you are. This is another reason why, for the last ~5 years, I’ve been pushing for the terminology “AI x-risk” and “AI x-safety” to replace “(long-term) AI risk” and “(long-term) AI safety”. For audiences not familiar with the “x” or “existential” terminology, one can say “large-scale risk” to point to the large stakes, rather than always saying “long-term”.
(Also, the fact that I don’t know who you are is actually fairly heartening :)
I’m very pleased to see this line of reasoning being promoted. Mutual transparency of agents (information permeability of boundaries) is a really important feature of & input to real-world ethics; thanks for expounding on it!