As I said on facebook, I think this mostly goes away (leaving a rather non-speculative case) if one puts even a little weight on special obligations to people in our generation:
AMF clearly saves lives in the short run. If you give that substantial weight rather than evaluating everything solely from a “view from nowhere” long run perspective where future populations are overwhelmingly important, then it is clear AMF is good. It is an effective way to help poor people today and unlikely to be a comparably exceptional way to make the long run worse. If you were building a portfolio to do well on many worldviews or for moral trade it would be a strong addition. You can avoid worry about the sign of its long run effects by remembering relative magnitude.
I was just thinking about this again and I don’t believe it works.
Suppose we want to maximize expected value over multiple value systems. Let’s say there’s a 10% chance that we should only care about the current generation, and a 90% chance that generational status isn’t morally relevant (obviously this is a simplification, but I believe the result generalizes). Then the expected utility of AMF is

EU(AMF) = 0.1 × (value to the current generation) + 0.9 × (value to the current generation + effects on all future generations)

Far future effects still dominate.
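To make the arithmetic concrete, here is a minimal sketch in Python. The magnitudes are invented placeholders (not estimates from this discussion), chosen only to show how the 0.9-weighted term swamps the short-run benefit:

```python
# Invented, purely illustrative magnitudes on an arbitrary utility scale.
current_gen_value = 1.0     # short-run value of the lives AMF saves
far_future_value = -1000.0  # suppose AMF's far-future effect is modestly negative but huge in magnitude

p_current_only = 0.1  # credence: only the current generation matters
p_neutral = 0.9       # credence: generational status isn't morally relevant

eu_amf = (p_current_only * current_gen_value
          + p_neutral * (current_gen_value + far_future_value))

print(eu_amf)  # -899.0: the far-future term dominates despite the 10% current-generation view
```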
You could say it’s wrong to maximize expected utility across multiple value systems, but I don’t see how you can make reasonable decisions at all if you’re not trying to maximize expected utility. If you’re trying to “diversify” across multiple value systems then you’re doing something that’s explicitly bad according to a linear consequentialist value system, and you’d need some justification for why diversifying across value systems is better than maximizing expected value over value systems.
The scaling factors there are arbitrary. I can throw in theories that claim things are infinitely important.

This view is closer to ‘say that views you care about got resources in proportion to your attachment to/credence in them, then engage in moral trade from that point.’

Hi Carl,
I am not familiar with the moral uncertainty literature, but to my mind it would make sense to define the utility scale of each welfare theory such that the difference in utility between the best and worst possible state is always the same: for example, always assigning 1 to the best possible state and −1 to the worst possible state. In that case, wouldn’t the weights of each welfare theory represent their respective strength/plausibility, and therefore not be arbitrary?
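A rough sketch of that normalization in Python, with invented theories, options, and utilities; for simplicity it treats the best and worst of the listed options as a stand-in for the best and worst possible states:

```python
# Two invented theories, each scoring three options on its own arbitrary scale.
theories = {
    "person_affecting": {"credence": 0.1, "utils": {"AMF": 100.0, "MIRI": 10.0, "do_nothing": 0.0}},
    "total_util":       {"credence": 0.9, "utils": {"AMF": 5.0,   "MIRI": 1e9,  "do_nothing": 0.0}},
}

def normalize(utils):
    """Rescale one theory's utilities so its best option maps to +1 and its worst to -1."""
    lo, hi = min(utils.values()), max(utils.values())
    return {option: 2 * (u - lo) / (hi - lo) - 1 for option, u in utils.items()}

# Credence-weighted scores on the common [-1, 1] scale.
combined = {
    option: sum(t["credence"] * normalize(t["utils"])[option] for t in theories.values())
    for option in ["AMF", "MIRI", "do_nothing"]
}
print(combined)
```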
Okay, can you tell me if I’m understanding this correctly?
Say my ethical probability distribution is 10% prior existence utilitarianism and 90% total utilitarianism. Then the prior existence segment (call it P) gets $1 and the total existence segment (call it T) gets $9. P wants me to donate everything to AMF and T wants me to donate everything to MIRI, so I should donate $1 to AMF and $9 to MIRI. So that means people are justified in donating some portion of their budget to AMF, but not all of it unless they believe AMF is also the best charity for helping future generations.*
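A minimal sketch of that allocation rule in Python, using only the credences and charity picks from the example above:

```python
budget = 10.0  # dollars, as in the example

# Credence in each view and the charity that view would fund exclusively.
views = {
    "prior_existence":   {"credence": 0.1, "top_charity": "AMF"},
    "total_utilitarian": {"credence": 0.9, "top_charity": "MIRI"},
}

allocation = {v["top_charity"]: budget * v["credence"] for v in views.values()}
print(allocation)  # {'AMF': 1.0, 'MIRI': 9.0}
```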
This is a nice idea but I worry it won’t work.

Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people’s utility negligible moral weight, because the kinds of reasons that suggest we can give future people less weight don’t go any way towards suggesting that we can ignore them. To do that, they’d have to show that future people’s moral weight was (more than!) inversely proportional to their temporal distance from us. But the reasons they give tend to show that we have special obligations to people in our generation, and say nothing about our obligations to people living in the year 3000 AD versus people living in the year 30,000 AD. [Maybe I’m missing an argument here?!] Thus any plausible moral theory will be such that the calculation is dominated by very long-term effects, and long-term effects will dominate our decision-making process.
Why would we put more weight on current generations, though? I’ve never seen a good argument for that. Surely there’s no meaningful moral difference between faraway, distant, unknown people alive today and faraway, distant, unknown people alive tomorrow. I can’t think of any arguments for charitable distribution which would fall apart in the case of people living in a different generation, or any arguments for agent relative moral value which depend specifically on someone living at the same time as you, or anything of the sort. Even if you believe that moral uncertainty is a meaningful issue, you still need reasons to favor one possibility over countervailing possibilities that cut in opposite directions.
It is an effective way to help poor people today and unlikely to be a comparably exceptional way to make the long run worse. If you were building a portfolio to do well on many worldviews or for moral trade it would be a strong addition.
If we assign value to future people then it could very well be an exceptional way to make the long run worse. We don’t even have to give future people equal value; we just have to let future people’s value have equal potential to aggregate, and we get the same result.
You can avoid worry about the sign of its long run effects by remembering relative magnitude.
Morality only provides judgements of one act or person over another. Morality doesn’t provide any appeal to a third, independent “value scale”, so it doesn’t make sense to try to cross-optimize across multiple moral systems. I don’t think there is any rhyme or reason to saying that it’s okay to have 1 unit of special obligation moral value at the expense of 10 units of time-egalitarian moral value, or 20 units, or anything of the sort.
So you’re saying that basically “this action is really good according to moral system A, and only a little bit bad according to moral system B, so in this case moral system A dominates.” But these descriptors of something being very good or slightly bad only mean anything in reference to other moral outcomes within that moral system. It’s like saying “this car is faster than that car is loud”.
Carl’s point, though not fully clarified above, is that you can just pick a different intervention that does well on moral system B and is only a little bit bad according to A, pair it off with AMF, and now you have a portfolio that is great according to both systems. For this not to work AMF would have to be particularly bad according to B (bad enough that we can’t find something to cancel it out), rather than just a little bit bad. Which a priori is rather unlikely.
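To illustrate the pairing idea with a toy sketch in Python (the scores and the choice of a second intervention are invented for illustration, not taken from the discussion):

```python
# Invented scores for how each intervention looks under moral systems A and B,
# each expressed in that system's own arbitrary units.
scores = {
    "AMF":            {"A": 10.0, "B": -1.0},  # great on A, a little bit bad on B
    "animal_charity": {"A": -1.0, "B": 10.0},  # a little bit bad on A, great on B
}

# The paired portfolio is comfortably positive under both systems.
portfolio = {system: sum(s[system] for s in scores.values()) for system in ("A", "B")}
print(portfolio)  # {'A': 9.0, 'B': 9.0}
```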
AMF being a little bad for x-risk could mean an expected value of thousands or millions of people never living. The problem is that your judgements of something being “a little bit bad” or “very bad” are only meaningful in reference to that moral system.
My point is that it’s not coherent to try to optimize multiple moral systems because there is no third scale of meta morality to compare things to. If you want you can assign greater weight to existing people to account for your uncertainty about their moral value, but in no case do you maximize moral value by splitting into multiple causes. If AMF maximizes moral value then it wouldn’t make sense to maximize anything else, whereas if AMF doesn’t maximize moral value then you shouldn’t give it any money at all. So yes it will work but it won’t be the morally optimal thing to do.
Nope, ‘little bit bad’ is just relative to other interventions designed to work through that moral framework. No judgement about which system is better or more important is necessary.
Sure, but once you choose to act within a single moral framework, does pairing charities off into portfolios make any sense at all? Nope.
My donations are joint with my partner. We have different moral frameworks. EA at large has a wide variety of moral frameworks. And my moral framework now is likely different to my moral framework 10 years down the line which will in turn be different to my framework 20 years down the line.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’. But his argument highlights why I shouldn’t worry so much about AMF being just a little bit bad on that front, relative to interventions designed to work in that field, because in the event that I do come round to that point of view I’ll be able to overwhelm that badness with relatively small donations to animal charities that my future self will presumably want to make anyway. And, most importantly, this will very likely continue to be true regardless of whether AMF turns out to be net positive or net negative for overall suffering according to my future self.
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
My donations are joint with my partner. We have different moral frameworks.
Yes, and this is a special case of people with different goals trying to fit together. My point was about individual agents’ goals.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
I don’t think so. If you can’t control certain donations then they’re irrelevant to your decision.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’.
This doesn’t seem right—if you got terminal cancer, presumably you wouldn’t consider that a good reason to suddenly ignore animals. Rather, you are uncertain about animals’ moral value. So what you should do is give your best-guess, most-informed estimate about animal value and rely on that. If you expect a high chance that you will find reasons to care about animals more, but a low chance that you will find reasons to care about animals less, then your current estimate is too low, and you should start caring more about animals right now until you have an unbiased estimator where the chances of being wrong are the same in either direction.
In such a case, you should donate to whichever charity maximizes value under this framework, and it isn’t reasonable to expect to be likely to change beliefs in any particular direction.
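A minimal sketch of that “no expected direction of change” point in Python; the prior and likelihoods are invented for illustration:

```python
# Toy Bayesian model: current credence that animal suffering matters a great deal,
# and two exhaustive future observations you might make. All numbers are invented.
prior = 0.3

# (likelihood if the claim is true, likelihood if it is false) for each observation.
likelihoods = {
    "evidence_for":     (0.8, 0.2),
    "evidence_against": (0.2, 0.8),
}

def posterior(p, l_true, l_false):
    """Bayes' rule for a binary claim."""
    return p * l_true / (p * l_true + (1 - p) * l_false)

# Average the posterior over how likely each observation is to occur.
expected_posterior = sum(
    (prior * l_true + (1 - prior) * l_false) * posterior(prior, l_true, l_false)
    for l_true, l_false in likelihoods.values()
)
print(round(expected_posterior, 3))  # 0.3 -- equal to the prior, so no expected net shift
```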
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
Sure, please do.