Yes, for the reasons you mention, critical-level utilitarianism has some unintuitive conclusions. I would go further and say that if your ethical theory tells you to oppose the existence of people who are happy, just less happy than average, then you should question your ethical theory. More generally, it leads to the ‘sadistic conclusion’: there are cases where it says adding people whose lives are all full of suffering is better than adding people whose lives are all positive.
In his introduction to population ethics, Ben West tabulates the consequences of various positive and negative critical levels here (see ‘in tabular form’), and finds that every choice leads to some undesirable consequence (as Arrhenius had previously shown).
He concludes:
In this post, we investigated modifying f, g and c. However, we saw that having c be anything but zero leads to a “sadistic conclusion”, and having f be non-constant leads to the “Separated Worlds” problem, meaning that we conclude V must be of the form
V(u) = ∑ g(u_i)
where g is a continuous, monotonically increasing function. This is basically classical (or total) utilitarianism, with perhaps some inequality aversion.
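To make the sadistic conclusion concrete, here is a minimal numeric sketch (my own illustration, not Ben West’s code) using the simplest critical-level value function V = ∑ (u_i − c): with a positive critical level, adding one miserable person can score better than adding many mildly happy people.

```python
def clu_value(welfares, c):
    """Critical-level utilitarian value: sum of (welfare - critical level).
    Illustrative toy model only; all numbers below are made up."""
    return sum(u - c for u in welfares)

base = [5.0] * 100          # an existing population of 100 reasonably happy people
c = 1.0                     # a positive critical level

# Option A: add ten people with mildly positive welfare (0.5, below the critical level)
option_a = base + [0.5] * 10
# Option B: add one person with clearly negative welfare
option_b = base + [-1.0]

print(clu_value(option_a, c) - clu_value(base, c))  # 10 * (0.5 - 1.0) = -5.0
print(clu_value(option_b, c) - clu_value(base, c))  # 1 * (-1.0 - 1.0) = -2.0
# With c > 0, option B ranks above option A: the 'sadistic conclusion'.
```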
Further to this, there are considerations relating to the fact that most people have complex values, and it’s difficult to argue that your one true ethical theory should override others’ values, as Yudkowsky has pointed out from time to time. It’s not as easy as saying that your theory weights all people equally: lots of people believe that about their own theories, and if yours wins automatically over the others, then you’re giving yourself a privileged meta-ethical position. If you take a stance where others’ values count, then you would want to take into account a wide range of secular views, for example that life should be valued and that people should be treated with some dignity even when they are unhappy. In that light, discussing the more controversial implications of one specific ethical theory would probably be counter-productive.
Moreover, there are considerations relating to moral uncertainty and compromise that also undermine the approach of ‘reasoning to extreme conclusions from one ethical theory’. If you’re unsure about which ethical theory is correct, then it’s better to take actions that satisfy many ethical theories, rather than just one. Moreover, moral compromise or moral trade would often force you to do so anyway. For example, if you were funding condoms for the poor while someone else was funding malaria nets (which, suppose for the purpose of the example, increase population), then you could both switch to something with a neutral effect on population and a larger welfare improvement, like deworming (which prevents illness more than death). In this sense, moral trade forces moral compromise, which leads to a good end state of affairs, similar to what would happen if everyone had moral uncertainty or gave appropriate credence to others’ moral positions in the first place.
I hope this feedback gives you some useful directions to pursue in thinking about these questions.
In that light, discussing the more controversial implications of one specific ethical theory would probably be counter-productive.
My experience with abstract population ethics discussions, at least on online forums, is that they generate more heat than light; they rarely tell you what you should do differently, or even how to change your decision-making process. I tried to strike a balance between making strong ethical assumptions, which few would agree with, and being too abstract, which wouldn’t be practically useful.
So I didn’t try to justify my ethical assumptions, and just did calculations with the theory I thought most people would agree with. Using a critical level is compatible with theories that don’t use any “person-affecting principle” and that value improvements to future people’s lives just as much as improvements to current people’s lives.
I chose to use critical-level utilitarianism because it’s very general; total utilitarianism, number-dampened utilitarianism, and average utilitarianism are all equivalent for small changes if you use the right critical level. If population or average well-being change too much they’re not equivalent, but as an individual effective altruist you can’t change the world enough for that to matter—even a 1% change in global population or GDP is a huge number.
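As a rough illustration of that equivalence claim (a toy sketch with made-up numbers, not the post’s calculations): for a marginal change, critical-level utilitarianism with c = 0 tracks the total view, and with c set to the current average welfare it tracks the average view, up to a positive scaling factor.

```python
# Illustrative sketch (assumed toy numbers): marginal value of adding k people
# at welfare u to a population of size N with average welfare avg, under
# total, average, and critical-level utilitarianism.
N, avg = 7e9, 5.0      # hypothetical population size and average welfare
k, u = 1e6, 3.0        # add a million people at welfare 3 (positive, below average)

delta_total = k * u                                   # total view
delta_average = (N * avg + k * u) / (N + k) - avg     # average view
delta_clu_c0 = k * (u - 0.0)                          # CLU with critical level 0
delta_clu_cavg = k * (u - avg)                        # CLU with critical level = avg

print(delta_total, delta_clu_c0)        # identical: c = 0 matches the total view
print(delta_average, delta_clu_cavg)    # same sign; the average view is scaled by ~1/N
```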
If you’re unsure about which ethical theory is correct, then it’s better to take actions that satisfy many ethical theories, rather than just one. Moreover, moral compromise or moral trade would often force you to do so anyway.
Actually, I think that facilitating moral trade is a major advantage of using a “critical level”. The “utility-maximizing” compromise between two altruists who disagree on the critical level is to take the average of the levels. (However, this requires altruists to be sincere about their preferences, and to agree on everything else except population ethics.) The two altruists don’t even have to be critical-level utilitarians for this to work. For example, someone with a total view (critical level ~= $300) and someone with an average view (effective critical level ~= $3,000) could agree to use a critical level of $1,000 for their donation decisions.
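A quick sketch of why the average is the natural compromise, under the assumption that both altruists share the same welfare measure and differ only in their critical level: the sum of their two valuations of any option is exactly twice the valuation of a single agent using the averaged critical level, so maximizing that agent’s value maximizes their combined value.

```python
# Toy check (my own illustration): two critical-level altruists who agree on
# welfare but not on the critical level. Each option raises existing welfare
# by dW and adds n people at welfare u (all numbers hypothetical).
def clu(dW, n, u, c):
    return dW + n * (u - c)

c1, c2 = 2.0, 6.0            # e.g. a low (total-ish) and a high critical level
c_avg = (c1 + c2) / 2.0

options = [(10.0, 0, 0.0), (0.0, 5, 4.0), (4.0, 2, 5.0)]  # (dW, n, u) triples
for dW, n, u in options:
    combined = clu(dW, n, u, c1) + clu(dW, n, u, c2)
    via_average = 2 * clu(dW, n, u, c_avg)
    print(combined, via_average)   # identical, so the rankings coincide
```

This assumes the compromise target is the unweighted sum of the two valuations; with different bargaining weights, the compromise level would shift toward the weightier party accordingly.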
I hope this feedback gives you some useful directions to pursue in thinking about these questions.
Thanks. Sorry if this post was too aggressive—I didn’t mean to strongly advocate a single theory; I just wanted to discuss something other than “which theory is correct?”, and this topic seemed like the best choice. I’d like to write more articles about how population changes can be evaluated, so I’m wondering where to go next. Here’s what I can think of:
1. Don’t bother: There’s not enough agreement on population ethics for it to be worth discussing at all.
2. Be more abstract: Talking about population-ethical theories is more productive than I think it is, and I should focus on exploring a broader range of theories. I could write a post introducing the basic concepts and explaining how they’re relevant to effective altruism.
3. Use a different theory: The most popular theory (which one?) isn’t compatible with a critical level, so I should focus on the implications of that one.
4. Change my assumptions slightly: I could estimate individuals’ utility differently (e.g. using HDI instead of GDP), or value population changes somewhat differently.
5. Continue with this: Go into more detail, do more realistic calculations, and justify my assumptions better.
Which option do you think is best? I want to write about whatever people are most interested in, as long as it’s related to this area, since that’s what I’m most knowledgeable about. But I don’t have a great idea of what that is, so any suggestions would be greatly appreciated!
Calling this “critical level utilitarianism” opens you to concerns raised by Ryan (which I share) and doesn’t seem to buy you anything.
Just say that $300 is the point at which life is worth living (i.e. that’s the point at which utility is zero). Then you don’t run into weird crap like “someone making $200 per year has a life that’s worth living for them, but makes society a worse place.”
(I would call this point the “neutral level” if you’re looking for terminology.)
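To spell out the difference (my own sketch; the $100 subsistence level below is a hypothetical number introduced purely for illustration): under a critical-level rule, personal welfare can be positive while the social contribution is negative, whereas under the “neutral level” framing the zero of utility is simply placed at $300, so the two never come apart.

```python
import math

# Hypothetical numbers for illustration only.
subsistence = 100.0     # assumed income at which personal welfare is zero
critical = 300.0        # critical level used in the post
income = 200.0

# Critical-level framing: personal welfare and social contribution use different zeros.
personal_welfare = math.log(income / subsistence)     # > 0: life worth living for them
social_contribution = math.log(income / critical)     # < 0: "makes society a worse place"

# Neutral-level framing: just define utility to be zero at $300.
neutral_utility = math.log(income / critical)         # one number, no mismatch

print(personal_welfare, social_contribution, neutral_utility)
```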
I chose to use critical-level utilitarianism because it’s very general; total utilitarianism, number-dampened utilitarianism, and average utilitarianism are all equivalent for small changes if you use the right critical level.
Yes, it’s some consolation that CLU is a family of standards that includes standard TU.
Actually, I think that facilitating moral trade is a major advantage of using a “critical level”. The “utility-maximizing” compromise between two altruists who disagree on the critical level is to take the average of the levels.
Yep, the concern here is that we would get into that situation in the first place. I’m interested in finding ways to avoid having some optimistic altruists strongly advocating for a greatly increased population while pessimists advocate for a greatly decreased one. Of course, not discussing population at all is not an entirely satisfactory solution, so perhaps we should just try to emphasise harmony more.
Thanks. Sorry if this post was too aggressive—I didn’t mean to strongly advocate a single theory; I just wanted to discuss something other than “which theory is correct?”, and this topic seemed like the best choice. I’d like to write more articles about how population changes can be evaluated, so I’m wondering where to go next. Here’s what I can think of:
1. Don’t bother: There’s not enough agreement on population ethics for it to be worth discussing at all.
2. Be more abstract: Talking about population-ethical theories is more productive than I think it is, and I should focus on exploring a broader range of theories. I could write a post introducing the basic concepts and explaining how they’re relevant to effective altruism.
3. Use a different theory: The most popular theory (which one?) isn’t compatible with a critical level, so I should focus on the implications of that one.
4. Change my assumptions slightly: I could estimate individuals’ utility differently (e.g. using HDI instead of GDP), or value population changes somewhat differently.
5. Continue with this: Go into more detail, do more realistic calculations, and justify my assumptions better.
I don’t think you fell into the trap of appearing aggressive or being a narrow-minded advocate of one theory. It was also a good post and had some interesting conclusions. I just think there are some traps in nearby areas, relating to upsetting altruists who have more optimistic or pessimistic views, or being divisive, which—if you want—you can guard against by privately circulating any potentially controversial draft. From your options, 1, 4 and 5 seem fine, and 3 seems especially useful. Rather than taking the approach of “Throughout this article, I’ll assume you agree with Broome’s article”, it might be better to perform at least some small amount of synthesis of different views. For instance, some standard (and useful) caveats for a CLU analysis would be that:
- CLU has some unintuitive conclusions (the sadistic conclusion);
- people’s preferences for living or procreating may give us reasons to let them do these things;
- prioritarian or Rawlsian justice-based views may encourage you to care more about improving the lives of those who are worst off.
Someone pointed out to me that long-term considerations dominate population ethics. So even if one places intrinsic value on population changes, the calculation might be dominated by how these changes affect the survival of humanity. Population increases may destabilize humanity due to competition for scarce resources. On the other hand, they may decrease the probability that every last human will die.
That might be true of short-term population ethics, but conversely, in the long run most of the value of colonizing the galaxy (etc.) will come from the higher population it could support—unless you’re an average-utilitarian, in which case it is much less valuable.

That’s what I was saying. The potential long-term population outweighs the effects of short-term population.
Thanks for this, it’s interesting to see the numbers worked out.
That said, I have concerns about its relevance, particularly in producing a variable we might want to target. It’s applying population axiologies which are supposed to be about complete population-histories to local populations at a given time. If you make all the value assumptions you need (plus some about personal utility being a sum over utilities at different times) then this does track the amount of aggregate utility being directly contributed by different countries at different moments. But this ignores externalities which are incurred either by other countries, or the future, or both. There are a lot of different indirect effects of changes in population size, and if you start trying to account for these then you could get sensible practical rules which look quite different from critical-level utilitarianism.
Thanks for the feedback!

It’s applying population axiologies which are supposed to be about complete population-histories to local populations at a given time.
You’re right, my calculations only work for the full population history if population and GDP stay the same forever for each country. The main element of uncertainty is future per capita GDP, because population changes more slowly and is easier to predict. In general, since GDP per capita is likely to be higher in the future, I’m probably rating population increases too unfavorably.
But this ignores externalities which are incurred either by other countries, or the future, or both. There are a lot of different indirect effects of changes in population size, and if you start trying to account for these then you could get sensible practical rules which look quite different from critical-level utilitarianism.
Whether externalities are a major issue depends on your choice of critical level. If a country is very far from the critical level, externalities aren’t a major factor in my model (obviously if you aren’t a critical-level utilitarian they may be more important). For example, if the critical level is $300, adding an extra 1% of people with incomes of $30,000 is better than increasing everyone’s income by 4%, according to critical-level utilitarianism, because each added person is worth ln(30000/300) ≈ 4.6, which beats the roughly 4 units per hundred people gained from a 4% income increase (100 × ln(1.04) ≈ 3.9). Any externalities would have to be very large to be the dominant factor here. On the other hand, if the critical level is $3,000 and the potential income is $6,000, externalities could be a much bigger deal.
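The comparison can be checked directly, assuming the log-income utility model used in the post (the population size below is a placeholder):

```python
import math

# Placeholder: 100 existing people, log-income utility, as in the post's model.
people = 100

def gain_from_new_people(income, critical, fraction=0.01):
    """Value of adding `fraction` more people at the given income (CLU with log utility)."""
    return people * fraction * math.log(income / critical)

def gain_from_growth(growth=0.04):
    """Value of raising everyone's income by `growth` (log utility, independent of level)."""
    return people * math.log(1 + growth)

# Critical level $300, potential income $30,000: the population gain dominates.
print(gain_from_new_people(30000, 300), gain_from_growth())   # ~4.6 vs ~3.9

# Critical level $3,000, potential income $6,000: income growth dominates,
# so indirect effects / externalities matter much more here.
print(gain_from_new_people(6000, 3000), gain_from_growth())   # ~0.69 vs ~3.9
```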
I think it is plausible that externalities could be very large—and that the main mechanism for this may be changing the number of expected lives in the future. See for example discussion in section 1.1.2 of Nick Beckstead’s PhD thesis.

Compare Rabinowicz’s view. Having a unique worldwide neutral level seems bizarre. Isn’t accepting greedy neutrality better?
I think this is a cool idea. Owen and others pointed out that this is overly simplistic, but I think it can serve as a useful prod.
One thing which I think would be interesting is to tie this to specific policy choices. I’m not sure how top-rated charities affect population size, but if there’s a trade-off between quantity and quality that’s a useful thing to think about.