You make a lot of good points, Lucius!
One qualm I have, though, is that you talk about “AIs”, which assumes that personal identity will be clearly circumscribed. (Maybe you assume this merely for simplicity’s sake?)
I think it is much more problematic: an AI system could be a single large system with integrated information flows, or it could run as many small, unintegrated but identical copies. I have no idea what a fair allocation of rights would look like across these two situations.
Thanks, Siebe. I agree that things get tricky if AI minds get copied and merged, etc. How do you think this would impact my argument about the relationship between AI safety and AI welfare?