EA is, in part, about extrapolating from what you would do up close to what you should do far away. The classic introductory hypothetical is the person drowning right in front of you. Most people's moral instinct is that they should suffer some costs to save the person's life. Ergo, they should suffer some costs to save starving people across the world.
If you were to poll the world about whether people think it’s right or wrong to steal one of the two dozen turkeys from the rich man and give it to the starving family, I suspect a sizable percentage would say it’s right—or at least not wrong. You might not, but I hardly think that would be a rare response. My point is that extrapolating from that moral premise leads you to very counter-intuitive places.