[TL;DR: I guess taking uncertainty seriously takes the sting out of population ethics. And it’s usually a good heuristic for defusing counterintuitive, quasi-fanatical conclusions.]
Thanks for the post. But I don’t think it works. Your point is that there should be no controversy over RC, because all it implies is that zillions of people living in tranquil nirvana is better than 10 billion (or n) people living in blissful paradise...? It looks like you basically just set a higher bar for what counts as positive value. One could argue along the same lines that 2 people in bliss is worse than 1k living in nirvana.
The problem is:
Parfit called it “repugnant” for a reason, and he tried to find arguments against it. I don’t see the point of making it seem trivial… Actually, the philosophically interesting task is to explain why it is controversial. I suspect easy answers tend to be wrong here.
I don’t live in a nirvana-like state, but I still think my life has positive value. I can certainly grant that there are lives of negative value, and I can imagine my life becoming one. But it’s interesting to acknowledge that other people think about this differently.
I invite you to take uncertainty seriously. Some argue that, because of this uncertainty, RC doesn’t have “any probative force”, particularly if some of the relevant comparisons are highly imprecise or even impossible.
My personal take so far is that RC is right: there is a *logically possible* world w with a huge population m of lives barely worth living which is better than a world with n people living in ultimate bliss. For instance, I’m pretty confident that 10 billion people living in nirvana is better than 2 people living in bliss, just as 10^18 people in nirvana is probably better than 10 billion living in bliss.
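To spell out the arithmetic behind that confidence (my own back-of-the-envelope, assuming a simple totalist sum of welfare; u_nirvana and u_bliss are just my labels for the per-life values):

$$U(w) = 10^{18} \cdot u_{\text{nirvana}} \quad \text{vs.} \quad U(w') = 10^{10} \cdot u_{\text{bliss}},$$

so w wins whenever $u_{\text{nirvana}} > 10^{-8} \cdot u_{\text{bliss}}$ – i.e., as long as a life in nirvana is worth more than a hundred-millionth of a life in bliss, which is hard to deny if nirvana has any positive value at all.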
But as the comparison between the corresponding worlds gets harder, it may become increasingly unlikely, for a determinate world w’, that you can prove that U(w) > U(w’) – i.e., that you can justifiably believe that such-and-such w and w’ are those worlds. Uncertainty increases with each step in the chain of your argument.
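To make the compounding concrete, here is a minimal sketch (my own toy model, not anything from the original post): treat each “inductive step” of a spectrum argument as holding with independent credence p, so your credence in the far end of the chain is bounded above by p^n.

```python
# Toy model (an assumption of mine, not from the post): each inductive step
# in the argument holds with independent credence p, so the credence in a
# conclusion reached after n steps is bounded above by p ** n.
def chained_credence(p: float, n: int) -> float:
    """Upper bound on credence in a conclusion reached via n independent steps."""
    return p ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps: {chained_credence(0.95, n):.3f}")
#  1 steps: 0.950
#  5 steps: 0.774
# 10 steps: 0.599
# 20 steps: 0.358
```

Even at 95% confidence per step, twenty steps leave you below 40% – which is roughly why I think the far reaches of these chains carry little probative force.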
Plus, I think a good (and old) heuristic for consequentialists to avoid counterintuitive conclusions is to factor in uncertainty and check whether the conclusion remains sound – especially when the argument relies on something like “inductive steps”. E.g.: sure, a world w where 1 person is tortured for 50 years so that n others can live in bliss is better than our current world… but how certain can you be that such-and-such precise world is w?