I totally sympathize with your sentiment and feel the same way about incorporating other people’s values in a superintelligent AI. If I just went with my own wish list for what the future should look like, I would not care about most other people’s wishes. I feel as though many other people are not even trying to be altruistic in the relevant sense in which I want to be altruistic, and I don’t experience much moral motivation to help accomplish people’s weird notions of altruistic goals, let alone goals that are clearly non-altruistically motivated. In the same way, I’d feel little (admittedly, even less) motivation to help make the dreams of baby-eating aliens come true.
Having said that, I am confident that it would screw things up for everyone if I followed a decision policy that does not give weight to other people’s strongly held moral beliefs. It is already hard enough to not mess up AI alignment in a way that makes things worse for everyone, and it would become much harder still if we had half a dozen or more competing teams who each wanted to get their idiosyncratic view of the future installed.
BTW, note that value differences are not the only thing that can get you into trouble. If you hold an important empirical belief that others do not share, and you cannot convince them of it, it may appear as though you’re justified in doing something radical about it. But that’s even more likely to be a bad idea, because the reasons for taking peer disagreement seriously are stronger in empirical domains of dispute than in normative ones.
There is a sea of considerations from Kantianism, contractualism, norms for stable/civil societies, and advanced decision theory that, while each line of argument seems tentative on its own and open to skepticism, all taken together point very strongly in the same direction: things will be horrible if we fail to cooperate with each other, and cooperating is often the truly rational thing to do. You’re probably already familiar with a lot of this, but for general reference, see also this recent paper that makes a particularly interesting case for particularly strong cooperation, as well as other work on the topic, e.g. here and here.
This is why I believe that people interested in any particular version of utilitronium should not override AI alignment procedures at the last minute just to get an extra large share of the cosmic stakes for their own value system, and why I believe that people like me, who care primarily about reducing suffering, should not increase existential risk. Of course, all of this also means that people who want to benefit human values in general should take particular care to ensure that idiosyncratic value systems that may diverge from theirs also receive consideration and gains from trade.
This piece I wrote recently is relevant to cooperation, to the question of whether values are subjective, to how much convergence we should expect, and to what extent value extrapolation procedures bake in certain (potentially unilateral) assumptions.