If morality isn’t real, then perhaps we should just care about ourselves.
But suppose we do decide to care about other people’s interests, maybe not completely, but at least to some degree. To the extent that we decide to devote resources to helping other people, it makes sense to do this to the maximal extent possible, and this is what utilitarianism does.
I don’t think I do anything in my life to the maximal extent possible.
So you don’t want to raise your kids so that they can achieve their highest potential? Or when you’re training for a 5K or a half-marathon, you don’t want to make the best use of your training time? You don’t want to hit your best possible PR? I digress.
I don’t believe in all of the ideas, especially the ones about MIRI (AI risk). Still, in my mind, EA is just about getting the biggest bang for your buck. Donating is huge! And organizations such as GiveWell are just tools. Sure, I could scour GuideStar and evaluate and compare 990 forms myself, but why go through all the hassle?
Anyway, honestly, it doesn’t really matter that people call themselves “effective altruists.” And the philosophical underpinnings, which are built to be independent of utilitarianism, seem like an afterthought. “Effective Altruism” is really just a label; so that we can be on the same general page: Effective Altruism has Five Serious Flaws—Avoid It—Be a DIY Philanthropist Instead
There’s a statistic out there that something like two-thirds of donors do no research at all into the organizations they give to. I hope that some of those people just wouldn’t give at all ~ nonmaleficence.
If morality isn’t real, then perhaps we should just care about ourselves.
Lila’s argument that “morality isn’t real” also carries over to “self-interest isn’t real.” Or, to be more specific, her argument against being systematic and maximizing expected value (EV) in moral dilemmas also applies to prudential dilemmas, aesthetic dilemmas, and so on.
That said, I agree with you that it’s more important to maximize when you’re dealing with others’ welfare. See e.g. One Life Against the World:
For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.)
Why might it not be obvious? Well, suppose there’s a qualitative duty to save what lives you can—then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend—so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost—and thus passing to the entire world changes little.
I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world—not to be confused with pretend rhetorical saving the world—it is as if they had saved an intergalactic civilization.
Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save the child. I’m nearby, within reach, so I leap forward and drag one child off the railroad tracks—and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. “Quick!” you scream to me. “Do something!” But (I call back) I already saved one child from the train tracks, and thus I am “unimaginably” far ahead on points. Whether I save the second child, or not, I will still be credited with an “unimaginably” good deed. Thus, I have no further motive to act. Doesn’t sound right, does it?
Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don’t think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives.
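To spell out the arithmetic behind that last comparison, here is a rough back-of-the-envelope sketch. The figures (100 people afflicted, 10% of 100,000 killed) come from the quoted passage; the success probability is just a placeholder, since the quote only stipulates that it is equal for both interventions, so it cancels out of the comparison.

```python
# Rough expected-value comparison sketched from the quoted passage.
# The 0.5 success probability is a placeholder: the quote only says the two
# interventions have an *equal* probability of success, so it cancels out.
p_success = 0.5

lives_rare = 100                      # rare disease: ~100 people planetwide
lives_common = 0.10 * 100_000         # less spectacular disease: 10% of 100,000 = 10,000

ev_rare = p_success * lives_rare      # expected lives saved by the rare-disease cure
ev_common = p_success * lives_common  # expected lives saved by the common-disease cure

print(f"Rare disease cure:   {ev_rare:>8,.0f} expected lives saved")
print(f"Common disease cure: {ev_common:>8,.0f} expected lives saved")
print(f"Ratio: {ev_common / ev_rare:.0f}x")  # 100x, whatever the shared probability
```

Whatever the shared probability of success, the second option saves a hundred times as many expected lives, which is exactly the maximizing point the quoted passage is pressing.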