Toward the very end, you write:

“But what should we do, then?” Well, we still have reason to respect other values we hold dear — those that were never grounded purely in the impartial good in the first place. Integrity, care for those we love, and generally not being a jerk, for starters. Beyond that, my honest answer is: I don’t know.
You obviously don’t exclude the following, but I would strongly hope that — beyond just integrity, care for those we love, and not being a jerk — we can also at a minimum endorse a commitment to reducing overt and gratuitous suffering taking place around us, even if it might not be the single best thing we can do from the perspective of perfect impartiality across all space and time. This value seems to me to be on a similarly strong footing to the other three values you mention, and it doesn’t seem to stand or fall with perfect [or otherwise very strong] cosmic impartiality. I suspect you agree with its inclusion, but I feel it deserves emphasis in its own right.
Relatedly, in response to this:
Ask yourself: Does “this strategy seems good when I assume away my epistemic limitations” have the deep moral urgency that drew you to EA in the first place?
I would say “yes”, e.g. if I replace “this strategy” with something like “reducing intense suffering around me seems good [even] when I assume away my epistemic limitations [about long-term cosmic impacts]”. That does at least carry much of the deep moral urgency that motivates me. I mean, just as I can care for those I love without needing to ground it in perfect cosmic impartiality, I can also seek to reduce the suffering of other sentient beings without needing to rely on a maximally impartial perspective.
Thanks for this, Magnus! I have complicated thoughts on this point, hence my late reply. To some extent I’ll punt this to a forthcoming Substack post, but FWIW:
As you know, relieving suffering is profoundly important to me. I’d very much like a way to make sense of this moral impulse in our situation (and I intend to reflect on how to do so).
But, crucially, the problem isn’t that we don’t know “the single best thing” to do. It’s that if I don’t ignore my effects on far-future (etc.) suffering, I have no particular reason to think I’m “relieving suffering” overall. Rather, I’m plausibly either increasing or decreasing suffering elsewhere, quite drastically, and I can’t say these effects cancel out in expectation. (Maybe you’re thinking of “Option 3” in this section? If so, I’m curious where you disagree with my response.)
The reason suffering matters so deeply to me is the nature of suffering itself, regardless of where or when it happens — presumably you’d agree. From that perspective, and given the above, I’m not sure I understand the motivation for the view in your second paragraph. (The reasons to do various parochial things, or to respect deontological constraints, aren’t like this. They aren’t grounded in something like “this thing out there in the world is horrible, and should be prevented wherever/whenever it is [or whoever causes it]”.)
Yeah, my basic point was that just as I don’t think we need to ground a value like “caring for those we love” in whether it has the best consequences across all time and space, I think the same applies to many other instances of caring for and helping individuals — not just those we love.
For example, if we walk past a complete stranger who is enduring torment and is in need of urgent help, we would rightly take action to help this person, even if we cannot say whether this action reduces total suffering or otherwise improves the world overall. I think that’s a reasonable practical stance, and I think the spirit of this stance applies to many ways in which we can and do benefit strangers, not just to rare emergencies.
In other words, I was just trying to say that when it comes to reasonable values aimed at helping others, I don’t think it’s a case of “it must be grounded in strong impartiality or bust”. Descriptively, I don’t think that reflects virtually anyone’s actual values or revealed preferences, and I don’t think it’s reasonable from a prescriptive perspective either (e.g. I don’t think it’s reasonable or defensible to abstain from helping a tormented stranger based on cluelessness about the large-scale consequences).
I’ve replied to this in a separate Quick Take. :) (Not sure if you’d disagree with any of what I write, but I found it helpful to clarify my position. Thanks for prompting this!)