Carl's point, though not fully clarified above, is that you can just pick a different intervention that does well on moral system B and is only a little bit bad according to A, pair it off with AMF, and now you have a portfolio that is great according to both systems. For this not to work, AMF would have to be particularly bad according to B (bad enough that we can't find something to cancel it out), rather than just a little bit bad. Which a priori is rather unlikely.
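To make the offsetting arithmetic concrete, here is a toy sketch; the numbers are invented purely for illustration and are not estimates of any real charity.

```python
# Hypothetical utilities of a $1k donation under two moral systems, A and B.
amf = {"A": 100, "B": -5}     # great according to A, only a little bit bad according to B
other = {"A": -5, "B": 100}   # a B-focused intervention that is only a little bit bad according to A

# The paired portfolio scores well under both systems.
portfolio = {system: amf[system] + other[system] for system in ("A", "B")}
print(portfolio)  # {'A': 95, 'B': 95}

# The pairing only fails if AMF is *particularly* bad according to B,
# e.g. {"A": 100, "B": -500}, so that no affordable B-intervention cancels it out.
```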
AMF being a little bad for x-risk could mean, in expectation, thousands or millions of people not living. The problem is that your assumptions of something being "a little bit bad" or "very bad" are only meaningful in reference to that moral system.

My point is that it's not coherent to try to optimize for multiple moral systems, because there is no third scale of meta-morality to compare things against. If you want, you can assign greater weight to existing people to account for your uncertainty about their moral value, but in no case do you maximize moral value by splitting into multiple causes. If AMF maximizes moral value then it wouldn't make sense to maximize anything else, whereas if AMF doesn't maximize moral value then you shouldn't give it any money at all. So yes, it will work, but it won't be the morally optimal thing to do.
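A minimal sketch of why splitting doesn't maximize under a single valuation, assuming constant value per dollar; the figures are invented for illustration:

```python
# With a fixed moral weighting and no diminishing returns, any split of the
# budget forgoes value relative to giving everything to the single best option.
value_per_dollar = {"AMF": 10.0, "other": 8.0}  # hypothetical units of moral value per dollar
budget = 1000.0

best = max(value_per_dollar, key=value_per_dollar.get)
all_in = budget * value_per_dollar[best]                  # 10,000 units
half_and_half = 0.5 * budget * value_per_dollar["AMF"] \
              + 0.5 * budget * value_per_dollar["other"]  # 9,000 units
print(best, all_in, half_and_half)  # AMF 10000.0 9000.0
```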
Nope, "little bit bad" is just relative to other interventions designed to work through that moral framework. No judgement about which system is better or more important is necessary.

Sure, but once you choose to act within a single moral framework, does pairing charities off into portfolios make any sense at all? Nope.
My donations are joint with my partner. We have different moral frameworks. EA at large has a wide variety of moral frameworks. And my moral framework now is likely different to my moral framework 10 years down the line, which will in turn be different to my framework 20 years down the line.
Once you're looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really, really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I'd like to avoid interventions that are very bad for animals, in Carl's sense of "very bad". But his argument highlights why I shouldn't worry so much about AMF being just a little bit bad on that front, relative to interventions designed to work in that field, because in the event that I do come round to that point of view I'll be able to overwhelm that badness with relatively small donations to animal charities that my future self will presumably want to make anyway. And, most importantly, this will very likely continue to be true regardless of whether AMF turns out to be net positive or net negative for overall suffering according to my future self.
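A rough numerical version of why the sign of AMF's animal impact matters so little here; all figures are made up for illustration:

```python
# Hypothetical "animal-welfare units" per $1k, as judged by my possible future self.
amf_animal_effect = (-3.0, +3.0)   # small and uncertain in sign
animal_charity_effect = 100.0      # an intervention actually optimised for animals

# Even in the worst case for AMF, a relatively small side donation swamps its effect.
worst_case_total = min(amf_animal_effect) + animal_charity_effect
print(worst_case_total)  # 97.0, comfortably positive whichever way AMF turns out
```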
That's one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
My donations are joint with my partner. We have different moral frameworks.
Yes, and this is a special case of people with different goals trying to fit together. My point was about individual agents' goals.
Once you're looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
I don't think so. If you can't control certain donations then they're irrelevant to your decision.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really, really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I'd like to avoid interventions that are very bad for animals, in Carl's sense of "very bad".
This doesn't seem right: if you got terminal cancer, presumably you wouldn't consider that a good reason to suddenly ignore animals. Rather, you are uncertain about animals' moral value. So what you should do is give your best-guess, most-informed estimate of animals' moral value and rely on that. If you expect a high chance that you will find reasons to care about animals more, but a low chance that you will find reasons to care about animals less, then your current estimate is too low, and you should start caring more about animals right now, until you have an unbiased estimator where the chances of being wrong are the same in either direction.

In such a case, you should donate to whichever charity maximizes value under this framework, and it isn't reasonable to expect your beliefs to change in any particular direction.
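The "unbiased estimator" point can be put in expected-value terms; the probabilities and weights below are invented purely for illustration:

```python
# If you expect to revise your moral weight on animals upward much more often
# than downward, the expectation of your future estimate exceeds your current
# estimate, which means the current estimate is too low by your own lights.
current_weight = 0.2          # hypothetical current moral weight on animals
scenarios = [                 # (probability, weight your future self settles on)
    (0.6, 0.8),               # find reasons to care much more
    (0.3, 0.2),               # no change
    (0.1, 0.1),               # find reasons to care slightly less
]
expected_future_weight = sum(p * w for p, w in scenarios)
print(expected_future_weight)  # 0.55 > 0.2, so 0.2 isn't an unbiased estimate
```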
That's one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
Sure, please do.