In that light, discussing the more controversial implications of one specific ethical theory would probably be counter-productive.
My experience with abstract population ethics discussions, at least on online forums, is that they generate more heat than light; they rarely tell you what you should do differently, or even how to change your decision-making process. I tried to strike a balance between making strong ethical assumptions, which few would agree with, and being too abstract, which wouldn’t be practically useful.
So I didn’t try to justify my ethical assumptions, and just did calculations with the theory I thought most people would agree with. Using a critical level is compatible with theories that don’t use any “person-affecting principle” and that value improvements to future people’s lives just as much as improvements to current people’s lives.
I chose to use critical-level utilitarianism because it’s very general; total utilitarianism, number-dampened utilitarianism, and average utilitarianism are all equivalent for small changes if you use the right critical level. If population or average well-being changes too much, they’re no longer equivalent, but as an individual effective altruist you can’t change the world enough for that to matter—even a 1% change in global population or GDP is a huge number.
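To make that equivalence concrete, here’s a small sketch (the notation is mine, not from the original post). Under CLU with critical level c, adding a person with well-being u changes social welfare by u − c; under average utilitarianism, it moves the average in proportion to u − ū, where ū is current average well-being:

\[
W_{\mathrm{CLU}} = \sum_i (u_i - c), \quad \Delta W_{\mathrm{CLU}} = u - c; \qquad W_{\mathrm{avg}} = \bar{u}, \quad \Delta W_{\mathrm{avg}} = \frac{u - \bar{u}}{n + 1}.
\]

The two deltas always have the same sign when c = ū, so for marginal changes the average view behaves like CLU with an effective critical level equal to current average well-being; total utilitarianism is the special case c = 0 (with well-being measured from the neutral point), and number-dampened views land somewhere in between.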
If you’re unsure about which ethical theory is correct, then it’s better to take actions that satisfy many ethical theories, rather than just one. Moreover, moral compromise or moral trade would often force you to do so anyway.
Actually, I think that facilitating moral trade is a major advantage of using a “critical level”. The “utility-maximizing” compromise between two altruists who disagree on the critical level is to take the average of the levels. (However, this requires altruists to be sincere about their preferences, and to agree on everything else except population ethics.) The two altruists don’t even have to be critical-level utilitarians for this to work. For example, someone with a total view (critical level ≈ $300) and someone with an average view (effective critical level ≈ $3,000) could agree to use a critical level of $1,000 for their donation decisions.
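One note on the arithmetic, since $1,000 isn’t the arithmetic mean of $300 and $3,000: if utility is roughly logarithmic in income (which the dollar figures here seem to assume), then averaging the two critical levels in utility space amounts to taking the geometric mean of the dollar amounts:

\[
\frac{\ln c_1 + \ln c_2}{2} = \ln \sqrt{c_1 c_2}, \qquad \sqrt{\$300 \times \$3{,}000} \approx \$949 \approx \$1{,}000.
\]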
I hope this feedback gives you some useful directions to pursue in thinking about these questions.
Thanks. Sorry if this post was too aggressive—I didn’t mean to strongly advocate a single theory; I just wanted to discuss something other than “which theory is correct?”, and this topic seemed like the best choice. I’d like to write more articles about how population changes can be evaluated, so I’m wondering where to go next. Here’s what I can think of:
1. Don’t bother: There’s not enough agreement on population ethics for it to be worth discussing at all.
2. Be more abstract: Talking about population-ethical theories is more productive than I think it is, and I should focus on exploring a broader range of theories. I could write a post introducing the basic concepts and explaining how they’re relevant to effective altruism.
3. Use a different theory: The most popular theory (which one?) isn’t compatible with a critical level, so I should focus on the implications of that one.
4. Change my assumptions slightly: I could estimate individuals’ utility differently (like using HDI instead of GDP), or value population changes somewhat differently.
5. Continue with this: Go into more detail, do more realistic calculations, and justify my assumptions better.
Which option do you think is best? I want to write about whatever people are most interested in, as long as it’s related, since this is the area I’m most knowledgeable about. But I don’t have a great sense of what that is, so any suggestions would be greatly appreciated!
Calling this “critical level utilitarianism” opens you to concerns raised by Ryan (which I share) and doesn’t seem to buy you anything.
Just say that $300 is the point at which life is worth living (i.e. that’s the point at which utility is zero). Then you don’t run into weird crap like “someone making $200 per year has a life that’s worth living for them, but makes society a worse place.”
(I would call this point the “neutral level” if you’re looking for terminology.)
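To spell out the problem with a toy example (the $100 neutral income is hypothetical, purely for illustration): suppose utility is logarithmic in income with zero at a personal neutral income y₀, and the critical level is a higher income c. Anyone earning between y₀ and c then has a life worth living but counts negatively toward social welfare under CLU:

\[
u(y) = \ln(y / y_0), \qquad \text{social contribution} = u(y) - u(c) = \ln(y / c).
\]

With y₀ = $100 and c = $300, someone at $200 has u = ln 2 > 0 but contributes ln(2/3) < 0. Setting the critical level equal to the neutral level (c = y₀) makes the two judgments agree.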
I chose to use critical-level utilitarianism because it’s very general; total utilitarianism, number-dampened utilitarianism, and average utilitarianism are all equivalent for small changes if you use the right critical level.
Yes, it’s some consolation that CLU is a family of standards that includes standard TU.
Actually, I think that facilitating moral trade is a major advantage of using a “critical level”. The “utility-maximizing” compromise between two altruists who disagree on the critical level is to take the average of the levels.
Yep, the concern here is that we would get into that situation in the first place. I’m interested in finding ways to avoid having optimistic altruists strongly advocating for a greatly increased population and pessimists advocating for a greatly decreased population. Of course, not discussing population at all isn’t an entirely satisfactory solution, so perhaps we should just try to emphasise harmony more.
Thanks. Sorry if this post was too aggressive—I didn’t mean to strongly advocate a single theory; I just wanted to discuss something other than “which theory is correct?”, and this topic seemed like the best choice. I’d like to write more articles about how population changes can be evaluated, so I’m wondering where to go next. Here’s what I can think of:
1. Don’t bother: There’s not enough agreement on population ethics for it to be worth discussing at all.
2. Be more abstract: Talking about population-ethical theories is more productive than I think it is, and I should focus on exploring a broader range of theories. I could write a post introducing the basic concepts and explaining how they’re relevant to effective altruism.
3. Use a different theory: The most popular theory (which one?) isn’t compatible with a critical level, so I should focus on the implications of that one.
4. Change my assumptions slightly: I could estimate individuals’ utility differently (like using HDI instead of GDP), or value population changes somewhat differently.
5. Continue with this: Go into more detail, do more realistic calculations, and justify my assumptions better.
I don’t think you fell into the trap of appearing aggressive or being a narrow-minded advocate of one theory. It was also a good post and had some interesting conclusions. I just think there are some traps in nearby areas, relating to upsetting altruists who have more optimistic or pessimistic views, or being divisive, which—if you want—you can ward against by privately circulating any potentially controversial draft. From your options, 1, 4 and 5 seem fine; 3 seems especially useful. Rather than take the approach of “Throughout this article, I’ll assume you agree with Broome’s article”, it might be better to perform at least some small amount of synthesis of different views. For instance, some standard (and useful) caveats for a CLU analysis would be that:
CLU has some unintuitive implications (e.g. the sadistic conclusion)
people’s preferences for living or procreating may give reasons to let them do these things
prioritarian or Rawlsian-justice-based views may encourage you to care more about improving the lives of those who are worst off