My philosophical background is that of the physics stereotype that utterly loathes most academic philosophy, so I’m not sure if this discussion will be all that fruitful. Still I’ll give this a go.
This simply begs the question: “helping” and “people” are heavily indeterminate concepts, and the content imputed to them is highly consequential for the action-guidance that follows.
At some pretty deep level, I just don’t care. I treat statements like “It is better if people get vaccinated” or “It is better if people in malaria-prone areas sleep under bednets” as almost axiomatic, and that’s my starting point for working out where to donate. If there are lots of philosophers out there who disagree, well, that’s disappointing to me, but it’s not really so bad, because there are plenty of non-philosophers out there.
Suffice it to say that nearly all utilitarians today are intuitionists, which I honestly can’t take seriously as an independent reason for action, and it is a standard by which utilitarianism sowed the seeds of its own destruction, since any and all forms of utilitarianism entail seriously counter-intuitive conclusions.
The utilitarian bits of my morality do certainly come out of intuition, whether it’s of the “It is better if people get vaccinated” form or by considering amusingly complicated trolley problems as in Peter Unger’s Living High and Letting Die. And when you carry through the logic to a counter-intuitive conclusion like “You should donate a large chunk of your money to effective charity” then I bite that bullet and donate; and when you carry through the logic to conclude that you should cut up an innocent person for their organs, I say “Nope”. I don’t know anyone who strictly adheres to a pure form of any moral system; I don’t know of any moral system that doesn’t throw up some wildly counter-intuitive conclusions; I am completely OK with using intuition as an input to judging moral dilemmas; I don’t consider any of this a problem.
it seriously affects the evaluation of outcomes (e.g. the xrisk community...)
Yeah, the presence of futurist AI stuff in the EA community (and also its increasing prominence) is a surprise to me. I think it should be a sort of strange cousin, a group of people with a similar propensity to bite bullets as the rest of the EA community, but with some different axioms that lead them far away from the rest of us.
If you want to say that this is a consequence of utilitarian-type thinking, then I agree. But I’m not going to throw out cost-effectiveness calculations and basic axioms like “helping two people is better than helping one” just because there are people considering world dictators controlling a nano-robot future or whatever.