Why would we put more weight on current generations, though? I’ve never seen a good argument for that. Surely there’s no meaningful moral difference between faraway, distant, unknown people alive today and faraway, distant, unknown people alive tomorrow. I can’t think of any arguments for charitable distribution which would fall apart in the case of people living in a different generation, or any arguments for agent-relative moral value which depend specifically on someone living at the same time as you, or anything of the sort. Even if you believe that moral uncertainty is a meaningful issue, you still need reasons to favor one possibility over the countervailing possibilities that cut the other way.
AMF is an effective way to help poor people today and is unlikely to be a comparably exceptional way to make the long run worse. If you were building a portfolio to do well across many worldviews, or for the purposes of moral trade, it would be a strong addition.
If we assign value to future people, then it could very well be an exceptional way to make the long run worse. We don’t even have to give future people equal value; we just have to let their value have the same potential to aggregate, and you get the same result.
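A toy calculation of the aggregation point, with entirely made-up population figures and a deliberately steep discount on each future person; it assumes only that future people’s value is allowed to add up the same way present people’s does:

```python
# Made-up figures: even a steep per-person discount on future people
# is swamped once their value is allowed to aggregate.
present_people = 8e9             # roughly today's population
future_people = 1e15             # illustrative guess at potential future people
future_person_weight = 0.01      # value each future person at 1% of a present person

present_total = present_people * 1.0
future_total = future_people * future_person_weight
print(future_total / present_total)   # roughly 1250: the future still dominates
```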
You can avoid worrying about the sign of its long-run effects by keeping their relative magnitude in mind.
Morality only provides judgements of one act or person over another; it doesn’t provide any appeal to a third, independent “value scale”, so it doesn’t make sense to try to cross-optimize across multiple moral systems. I don’t think there is any rhyme or reason to saying that it’s okay to gain 1 unit of special-obligation moral value at the expense of 10 units of time-egalitarian moral value, or 20 units, or anything of the sort.
So you’re saying, basically, “this action is really good according to moral system A, and only a little bit bad according to moral system B, so in this case moral system A dominates.” But these descriptors of something being very good or slightly bad only mean anything in reference to other moral outcomes within the same moral system. It’s like saying “this car is faster than that car is loud”.
Carl’s point, though not fully spelled out above, is that you can just pick a different intervention that does well on moral system B and is only a little bit bad according to A, pair it off with AMF, and now you have a portfolio that is great according to both systems. For this not to work, AMF would have to be particularly bad according to B (bad enough that we can’t find something to cancel it out), rather than just a little bit bad, which a priori is rather unlikely.
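A minimal sketch of the pairing argument, with invented payoff numbers; each column’s units are only meaningful relative to other interventions scored under that same system:

```python
# Hypothetical (value under system A, value under system B) scores.
# Absolute numbers are invented; only within-system comparisons mean anything.
amf = (100, -1)          # great under A, only a little bit bad under B
partner = (-1, 100)      # only a little bit bad under A, great under B

portfolio = (amf[0] + partner[0], amf[1] + partner[1])
print(portfolio)         # (99, 99): positive under both systems
# The pairing only fails if AMF's score under B is so negative that no
# affordable partner intervention can cancel it out.
```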
AMF being a little bad for existential risk could still mean, in expectation, thousands or millions of people never living. The problem is that judgements of something being “a little bit bad” or “very bad” are only meaningful in reference to that moral system.
My point is that it’s not coherent to try to optimize across multiple moral systems, because there is no third scale of meta-morality to compare things on. If you want, you can assign greater weight to existing people to account for your uncertainty about their moral value, but in no case do you maximize moral value by splitting across multiple causes. If AMF maximizes moral value, then it wouldn’t make sense to fund anything else, whereas if AMF doesn’t maximize moral value, then you shouldn’t give it any money at all. So yes, the pairing will “work”, but it won’t be the morally optimal thing to do.
Nope, ‘little bit bad’ is just relative to other interventions designed to work through that moral framework. No judgement about which system is better or more important is necessary.

Sure, but once you choose to act within a single moral framework, does pairing charities off into portfolios make any sense at all? Nope.
My donations are joint with my partner. We have different moral frameworks. EA at large has a wide variety of moral frameworks. And my moral framework now is likely different to my moral framework 10 years down the line, which will in turn be different to my framework 20 years down the line.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really, really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am), I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’. But his argument highlights why I shouldn’t worry so much about AMF being just a little bit bad on that front, relative to interventions designed to work in that field: in the event that I do come round to that point of view, I’ll be able to overwhelm that badness with relatively small donations to animal charities that my future self will presumably want to make anyway, as the rough numbers sketched below illustrate. And, most importantly, this will very likely continue to be true regardless of whether AMF turns out to be net positive or net negative for overall suffering according to my future self.
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
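A rough illustration of the offsetting point in the comment above, assuming made-up magnitudes for AMF’s effect on animals and for a targeted animal charity:

```python
# Made-up magnitudes, in arbitrary animal-welfare units.
amf_effect_on_animals = -1.0       # could just as easily be +1.0; the sign is uncertain
animal_charity_per_dollar = 10.0   # assume a targeted charity is far more potent per dollar

small_donation = 10.0              # a comparatively small donation to the animal charity
net = amf_effect_on_animals + small_donation * animal_charity_per_dollar
print(net)                         # 99.0: the targeted donation swamps AMF's effect either way
```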
My donations are joint with my partner. We have different moral frameworks.
Yes, and this is a special case of people with different goals trying to fit together. My point was about individual agents’ goals.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
I don’t think so. If you can’t control certain donations then they’re irrelevant to your decision.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really, really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am), I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’.
This doesn’t seem right: if you got terminal cancer, presumably you wouldn’t consider that a good reason to suddenly ignore animals. Rather, the issue is that you are uncertain about animals’ moral value. So what you should do is form your best-guess, most-informed estimate of that value and rely on it. If you expect a high chance that you will find reasons to care about animals more, but only a low chance that you will find reasons to care about them less, then your current estimate is too low, and you should start caring more about animals right now, until you have an unbiased estimate that is as likely to be too high as too low.
In such a case, you should donate to whichever charity maximizes value under that best-guess framework, and it isn’t reasonable to expect your beliefs to change in any particular direction; the toy calculation below makes this concrete.
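A toy check of the unbiased-estimator point, with invented probabilities and revision sizes: if you genuinely expect upward revisions to be more likely than downward ones, your current weight is already inconsistent with your own forecast.

```python
# Current weight on animal welfare, plus a made-up forecast of how it might be revised.
current_weight = 1.0
p_up, p_same, p_down = 0.3, 0.6, 0.1      # hypothetical probabilities of each revision
expected_future_weight = (p_up * 10.0 * current_weight
                          + p_same * current_weight
                          + p_down * 0.1 * current_weight)
print(expected_future_weight)   # about 3.6, well above 1.0, so the current weight should already be higher
```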
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
Sure, please do.