Thanks! This is great context and a great way to ask for specifics. :-)
I think the situation is like this: I'm hypothetically in a position to exercise a lot of power over reproductive choices, perhaps by backing tax plans that either reward or punish having children. I think what you're asking is: "Suppose you know that your plan to offer a child tax credit will result in a large but miserable population. Should you stick with the plan on the utilitarian grounds that there will be so many people that total welfare comes out higher, even though each life is miserable?" The answer is no, I should not do that. I shouldn't use what power I have to make a world that I believe will contain a lot of miserable people.
I think a better power-inversion question is: "Suppose you are given dictatorial control of one million miserable and hungry people. Should you slaughter 999,000 of them so the other 1,000 can be well fed and happy?" My answer is, again, unsurprisingly, no. No, I shouldn't use dictatorial power to commit genocide against this unhappy group. Instead I should use it to implement policies I think will lead over time to a sustainable, happy population of 1,000, perhaps the same kind of anti-natalist policies that would be abhorrent in other, happier circumstances.
I think I share your suspicion: consequentialism's advice is imperfect. My sense is that it is imperfect mostly not for unfamiliar galactic-scale reasons, or other failures to react to odd situations involving unbelievably powerful political forces. If that were where it broke down, it would be mostly immaterial to considering alternatives to consequentialism in everyday situations (IMO).