There is also the Future Claimant's Representative (FCR), apparently a phrase from US bankruptcy/tort law that Ian Baucom, a US academic, has applied in environmental and museology contexts. This is probably out of scope for your question, but I'm interested in fleshing out what an FCR would look like if it represented the interests of future generations of AIs that are likely to enter the moral circle (i.e. when we turn off a GPT-n, or make big changes to an advanced/human-level AI, are we doing something we wouldn't be happy doing to a human or another entity that is possibly or roughly morally equivalent?). I think Bostrom may have mentioned something like this in one of his digital minds papers.
If anyone has thoughts or wants to work on this with me, please get in touch (I'm thinking of it as a video and/or a paper/essay).