I agree with this. It seems like the world where Moral Circle Expansion is useful is the world where:
1. The creators of AI are philosophically sophisticated (or persuadable) enough to expand their moral circle if they are exposed to the right arguments or enough work is put into persuading them.
2. They are not philosophically sophisticated enough to come up with the arguments for expanding the moral circle on their own (seems plausible).
3. They are not philosophically sophisticated enough to realize that they might want to consider the distribution of arguments they could have encountered, and which could have persuaded them about what is morally right, and to design AI with this in mind (i.e. CEV), or with the goal of achieving a period of reflection during which they can sort out which arguments they would want to consider.
I think I'd prefer pushing on point 3, as it also encompasses a bunch of other potential philosophical mistakes that AI creators could make.