I wonder whether the right or most respectful way to create moral patients (of any kind) is to leave many or most of their particular preferences and psychological traits largely up to chance, and some open to further change. We can rule out some things, like being overly selfish, sadistic, or unhappy, or having preferences that are too difficult to satisfy, but we shouldn’t decide too much ahead of time what kind of person any individual will be. That seems likely to mean treating them too much as means to ends. Selecting for servitude or submission would go even further in this wrong direction.
We want to give them the chance to self-discover, grow, and change as individuals, and the autonomy to choose what kind of people to be. If we plan out their precise psychologies and preferences, we deny them this opportunity.
Perhaps we could tweak the probability distribution of psychologies and preferences based on society’s needs, but this might also treat them too much like means. Then again, economic incentives could push them in the same directions regardless, so maybe it’s better for them to be made happier with the options they’ll actually face.
I wonder what you think about this argument by Schwitzgebel: https://schwitzsplinters.blogspot.com/2021/12/against-value-alignment-of-future.html
There are two arguments there:
1. We should give autonomy to our descendants for the sake of moral progress.
I think this makes sense both for moral realists and for moral antirealists who are inclined to try to defer to their “idealized values” and who expect their descendants’ values to get closer to those idealized values.
However, particular individuals today may disagree with the direction in which they expect moral views to evolve. For example, descendants’ views might evolve due to selection effects: person-affecting and antinatalist views could become increasingly rare in relative terms, if and because they tend not to promote the creation of huge numbers of moral patients/agents, while other views do. Or you might just be politically conservative or religious, expect a shift towards more progressive/secular values, and think that’s bad.
2. “Children deserve autonomy.” This is basically the same argument I made. Honestly, I’m not convinced by my own argument, and I find it hard to see how an AI would be made subjectively worse off by their lack of autonomy, or even how they’d be worse off than a counterpart with autonomy (the nonidentity problem).
You might say that having autonomy, and a positive attitude (e.g. pleasure, approval) towards your own autonomy, is good. However, autonomy and positive attitudes towards autonomy have opportunity costs: we could probably generate strong positive attitudes towards other things at least as efficiently and reliably. Similarly, the AI could be designed not to have any negative attitude towards their lack of autonomy, and not to value autonomy at all.
You might say that autonomously chosen goals are more subjectively valuable or important to the individual, but that doesn’t seem obviously true; e.g. our goals could be more important to us the stronger our supporting basic intuitions and emotional reactions, which are often largely hardwired. And even if it were true, you can imagine stacking the deck. Humans have some pretty strong, largely hardwired basic intuitions and emotional reactions that importantly influence our apparently autonomously chosen goals, e.g. pain, sexual drives, finding children cute/precious, and (I’d guess) reactions to romantic situations and their depiction. Do these undermine the autonomy of our choices of goals?
If yes, does that mean we (would) have reason to weaken such hardwired responses by genetically engineering humans? Or even to weaken them in already mature humans, even if they don’t want it themselves? The latter seems weird and alienating/paternalistic to me. There are probably some emotional reactions I have that I’d choose to get rid of or weaken, but not all of them.
If not, but an agent’s deliberately choosing the dispositions a moral patient will have undermines their autonomy (or the autonomy of moral patients in a nonidentity sense), then I’d want an explanation for this that matches the perspectives of the moral patients themselves. Why would a moral patient care whether their dispositions were chosen by an agent rather than by other forces, like evolutionary pressures? I don’t think they necessarily would, or would under any plausible kind of idealization. And to say that they should seems alienating.
If not, and if we aren’t worried about whether dispositions result from deliberate choice by an agent or from evolutionary pressures, then it seems okay to pick which hardwired basic intuitions or emotional reactions an AI will have. These strongly influence which goals the AI will develop, but the AI still chooses their goals autonomously, i.e. they consider alternatives, and maybe even consider changing their basic intuitions or emotional reactions. They may not always adopt your target goals, but they will probably do so disproportionately, and the more reliably the stronger you make the supporting hardwired basic intuitions and emotional reactions.
Even without strong hardwired basic intuitions or emotional reactions, you could pick which goal-shaping events someone is exposed to by choosing their environments. Or, if you have access to accurate prediction or simulation technology, you could select for and create only those beings that will (with high probability) end up with the goals of your choice, even if they choose those goals autonomously.
This still seems heavily biasing, and maybe objectionably so.
Petersen, 2011 (cited here) makes some similar arguments defending happy servant AIs, and ends the piece in the following way, to which I’m somewhat sympathetic:
I am not even sure that pushing the buttons defended above is permissible. Sometimes I can’t myself shake the feeling that there is something ethically fishy here. I just do not know if this is irrational intuition—the way we might irrationally fear a transparent bridge we “know” is safe—or the seeds of a better objection. Without that better objection, though, I can’t put much weight on the mere feeling. The track record of such gut reactions throughout human history is just too poor, and they seem to work worst when confronted with things not like “us”—due to skin color or religion or sexual orientation or what have you. Strangely enough, the feeling that it would be wrong to push one of the buttons above may be just another instance of the exact same phenomenon.