Another approach to thinking about these difficulties is to take counsel from the Maxwell's demon problem in thermodynamics. There, it looks as though you can get a "repugnant conclusion" in which the second law is violated, unless you address the details of the demon directly and carefully.
I suspect there is a corresponding gap in analyses of situations at the edges of population ethics. Call it the "repugnant demon." That is: in this hypothetical world full of trillions of people we're being asked to create, what powers must we bestow on the demon responsible for enforcing barely livable conditions? These trillions of people want better lives; otherwise, by definition, they would not be suffering. So the demon must be given the power to prevent them from having those improved lives. How?
Pretty clearly, what we're actually being asked is whether we want to create a totalitarian, autocratic, transgalactic prison state with total domination over its population. Is such a society one you wish to create, or would you prefer to use the demonic power it would take to produce that result in some other way?
A much smaller-scale check here is whether it is good to send altruistic donations to existing autocratic rulers. Their populations are not committing suicide, so the people must have positive life utility. The dictator can force the population to increase, so the implementation here would be finding dictators who will accept altruistic donations in exchange for setting up forced-birth camps in their countries.
In other words, I suspect that once you finish defining in detail what "repugnant demon" powers would be needed to build awful conditions for even comparatively small populations, it becomes immediately clear where the "missing negative utility" is in these cases: the powers required to produce conditions of very low satisfaction are actually very large. Using that power for the evil act of setting up a totalitarian prison camp, rather than for a different and morally preferable society, is to be condemned.
You could create huge numbers of unlikely-to-be-conscious beings, or low-moral-weight beings, who would not suffer at all and would only experience pleasure, but each would be only barely above neutral in expected value. These beings may be more efficient to create and use to generate value, because their brains are simpler and more parallelizable.
The value of the far future seems vastly dominated by artificial sentience in expectation. The expected-utility-maximizing artificial sentience could be huge numbers of beings with low average expected moral weight.
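To make this concrete, here is a purely illustrative expected-value sketch; every number below is hypothetical and chosen only to show how the totals can come out. Suppose each small digital mind has probability $p = 0.01$ of being sentient, moral weight $w = 10^{-3}$ relative to a human if sentient, and welfare $h = 1$ in human welfare units, and suppose a fixed resource budget supports either $N = 10^{9}$ such minds or one human-scale mind with welfare $1$. Then

$$\mathbb{E}[\text{value of the small minds}] = N \cdot p \cdot w \cdot h = 10^{9} \times 10^{-2} \times 10^{-3} \times 1 = 10^{4} \gg 1 = \mathbb{E}[\text{value of the single human-scale mind}],$$

so even though each being is only barely above neutral in expectation ($p \cdot w \cdot h = 10^{-5}$), sheer numbers can dominate the total.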
With current animals, and ignoring the far future, it might be invertebrates, per the so-called Rebugnant Conclusion (https://jeffsebodotnet.files.wordpress.com/2021/08/the-rebugnant-conclusion-.pdf). To be clear, though, I think many invertebrates are not very unlikely to be conscious, and it's plausible they have moral weight similar to or incomparable with that of humans, rather than much lower.
A bit repetitive of what I replied below, but it isn't clear to me that minimally-conscious beings can't suffer (or can be made unable to suffer).
On the relatively more stable ground of the power to choose between a world optimized for insects and one optimized for humans, I'm happy to report I'm a humanity partisan. :-)
In theory-of-mind terms, it sounds like we differ in estimating the likelihood that insects will come to be regarded as having conscious experience as we learn more. (For other invertebrates, I think the analysis may be very different.) My sense, given the extraordinary capabilities of really-clearly-not-conscious ML systems, is that pretty sophisticated behaviors are well within reach for unconscious organisms, more so than I might have thought a few years ago.
I think it's very likely that we can stack the deck in favour of positive welfare, even if it's still close to 0 due to a low expected probability of consciousness or low moral weight. There are genetically influenced individual differences in average hedonic setpoints between humans, along with some extreme cases like Jo Cameron. The systems for pleasure and suffering don't overlap fully, so we could cut out the parts devoted selectively to suffering or reduce their sensitivity.
I agree that many sophisticated behaviors are well within reach for unconscious systems, but it's not clear this counts much against invertebrates. You can also come at it from the other side: it's hard to pick out capacities that humans have, that are with very high probability necessary for consciousness (e.g. theory of mind, self-awareness to the level of passing the mirror test, and the capacity for verbal report don't seem necessary), but that aren't present in (some) invertebrates. I'd recommend Rethink Priorities' work on this topic (disclaimer: I work there, but didn't work on this, and am not speaking for Rethink Priorities) and Luke Muehlhauser's report for Open Phil.
Also, at what point would you start to worry about ML (or other AI) systems being conscious, especially ones that aren’t capable of verbal report?
I completely agree that it is difficult to find "uniquely human" behaviors that seem indicative of consciousness, since animals share so many of them.
For animals that don't rear young, I'm much more likely to believe their behaviors are largely genetically determined, and therefore operate on time scales that don't really satisfy what I think it makes sense to call consciousness. I'm thinking, for instance, of the famous Sphex wasp "hacks," where complex behavior turns out to be fairly algorithmic and likely not indicative of anything approximating consciousness. Thanks for the pointer to the report!
WRT AI consciousness: I work on ML systems and have a lot of exposure to sophisticated models. My sense is that we are not close to that threshold, even with systems that are obviously able to pass naive Turing tests (and have). We now have a really powerful approach to world-model building via unsupervised noise prediction, but current techniques (including RL) are nowhere near enough to provide the kind of interiority that would start me worrying that there are conscious elements in AI systems.
IOW, I'm not a "scale is all you need" person: I don't think current ideas on memory/long-range augmentation, or current planning-style long-range state modeling, are workable. I mean, maybe at 10^100 times the scale it is all you need? But that's just another way of saying it isn't. :-) The sort of "self-talk" modularity being experimented with in some LLMs strikes me as the most promising current direction for this (e.g. the LaMDA paper), but the scale and ingredients are currently way too small for that to emerge, IMO.
I do suspect that building conscious AI will teach us much more about non-verbal-report consciousness. We have some access to these mechanisms through neuroscience experiments, but it is difficult going. My belief is that we have enough of them to be quite certain that many animals share something best called conscious experience.
Very interesting perspective I’ve never thought about.
Is your argument just that there do not exist choices with the logical structure of the repugnant conclusion, and that in any decision-situation that seems like the repugnant conclusion there is always a hidden source of negative utility to balance things out? Or is it more limited to the specific case that Parfit initially discussed?
If the latter, then I think your point is irrelevant to the philosophical questions about the repugnant conclusion, which are concerned only with its logical structure. (Consider Michael's comment above for some other situations with similar structural features.) But if the former, then you're going to need some general argument as to why no such structure ever exists, not just an argument as to why the specific case often proposed is a bit misleading. Even if Parfit's original thought experiment doesn't pump your intuitions, so long as some set of choices has a structurally similar distribution of utility across time, the question still arises. Do you have a more general argument for why no choices exist with these distributions?
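To spell out the bare structure at issue, with purely illustrative numbers: total views compare worlds by summing welfare, so a world $A$ of $10^{10}$ people at welfare $100$ each scores

$$10^{10} \times 100 = 10^{12} < 10^{13} = 10^{15} \times 0.01,$$

which is the score of a world $Z$ of $10^{15}$ people whose lives are barely worth living (welfare $0.01$ each). Any view that ranks worlds by total welfare therefore prefers $Z$ to $A$, regardless of what mechanism would bring either world about.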
More like the former.
In other words, the moral weight of the choice we're asked to make is about the use of power. An example that's familiar, and more successful because the power being exercised is much clearer, is the drowning-child case. The power there is the ability to go into the pond and rescue the child. Should one exercise that power, or are there reasons not to?
The powers being appealed to in these population-ethics scenarios are truly staggering. The question of how they should be used is (in my opinion) usually ignored in favor of treating the scenarios as preferences over states of affairs. I suspect this is why they end up being confusing: when you instead ask whether setting up forced-reproduction camps is a morally acceptable use of the power to craft an arbitrary society, there's just very little room left for moral people to disagree.
As for creating large numbers of beings that are unable to experience any negative utility and can experience only small amounts of positive utility, it isn't clear this power can even exist logically. (The same might be said about enforcing pan-galactic totalitarianism, but repugnant-conclusion effects IMO start being noticeable at scales where we can be quite sure the power does exist.)
If the power to create such beings exists, it implies a quite robust power to shape the minds and experiences of created beings. If it were used to prohibit the existence of beings with tremendous capacity for pleasure, I think that would be an immoral application. Another scenario, though, might be the creation of large numbers of minimally-sentient beings who (in some sense) mildly "enjoy" being useful and supportive of such high-experience people. Do toasters and dishwashers and future helpful robots qualify here? It depends on what panpsychism ends up looking like for hypothetical people with this kind of very advanced mind-design power. I could see it being true that such a world is possible, but I think this framing in terms of the exercise of power removes the repugnance from the situation as well. Is a world of leisure supported by minimally-aware robots repugnant? Nah, not really. :-)
This confuses me. In the original context in which the Repugnant Conclusion was dreamed up (neo-Malthusian debates over population control), seeking a larger population was a kind of laissez-faire approach opposed to "population policy," while advocates for smaller populations such as Garrett Hardin explicitly embraced 'coercion'. When Parfit originally formulated the Repugnant Conclusion, I don't think he imagined 'forced reproductive camps'!
So how about being more specific. Suppose you are living in 1968, and as a matter of fact you know that the claims made in Ehrlich and Ehrlich’s The Population Bomb are true. (This is a hypothetical, as those claims turned out to be false in the actual world.) And you have control over population policies—say, you are president of the United States. If you exercise coercive power over reproduction, you can ensure that the world will have a relatively small population of relatively happy people. If you don’t, and you simply let things be, then the world will have an enormous population of people whose lives are barely worth living.
This case has exactly the same structure as the Repugnant Conclusion, and not by accident: this is exactly the kind of question that Parfit and other population ethicists were thinking about in the 1960s and 1970s. But in this case, the larger population is not produced by the exercise of power; it is the smaller population that would be produced by coercion, and the larger population would be produced through laissez-faire. Thus, your argument about ‘the use of power’ does not support the claim that there do not exist choices with the logical structure of the Repugnant Conclusion.
In general, I think you have confused the Repugnant Conclusion itself with a weirdly specific variant of it, perhaps inspired by versions of the astronomical waste argument.
Thanks! This is great context and a great way to ask for specifics. :-)
I think the situation is like this: I'm hypothetically in a position to exercise a lot of power over reproductive choices, perhaps by backing tax plans that either reward or punish having children. I think what you're asking is: "Suppose you know that your plan to offer a child tax credit will result in a miserable population; should you stay with the plan because there will be so many miserable people that it will be better on utilitarian grounds?" The answer is no, I should not do that. I shouldn't exercise power I have to make a world that I believe will contain a lot of miserable people.
I think a better power-inversion question is: "Suppose you are given dictatorial control over one million miserable and hungry people. Should you slaughter 999,000 of them so the other 1,000 can be well fed and happy?" My answer is, again, unsurprisingly, no. I shouldn't use dictatorial power to commit genocide against this unhappy group. Instead, I should use it to implement policies I think will lead, over time, to a sustainable 1,000-member happy population, perhaps via the same kind of anti-natalist policies that would be abhorrent in other, happier circumstances.
A suspicion I think I share with you: consequentialism's advice is imperfect. My sense is that it is imperfect mostly not because of unfamiliar galactic-scale reasons, or other failures in reacting to odd situations involving unbelievably powerful political forces. If that were where it broke down, it would be mostly immaterial to considering alternatives to consequentialism in everyday situations (IMO).