Nope, ‘little bit bad’ is just relative to other interventions designed to work through that moral framework. No judgement about which system is better or more important is necessary.
My donations are joint with my partner. We have different moral frameworks. EA at large has a wide variety of moral frameworks. And my moral framework now is likely different to my moral framework 10 years down the line, which will in turn be different to my framework 20 years down the line.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’. But his argument highlights why I shouldn’t worry so much about AMF being just a little bit bad on that front, relative to interventions designed to work in that field, because in the event that I do come round to that point of view, I’ll be able to overwhelm that badness with relatively small donations to animal charities that my future self will presumably want to make anyway. And, most importantly, this will very likely continue to be true regardless of whether AMF turns out to be net positive or net negative for overall suffering according to my future self.
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
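To make the offsetting arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is an invented placeholder rather than an estimate of AMF’s or any animal charity’s actual impact; the only thing it is meant to show is the structure of the comparison being made, in which an intervention that targets animal suffering directly moves it far more per dollar than AMF plausibly does as a side effect.

```python
# Back-of-the-envelope sketch of the offsetting argument.
# All figures are invented placeholders, not real cost-effectiveness estimates.

amf_donation = 10_000                   # dollars to AMF (hypothetical)
amf_animal_effect_per_dollar = 0.01     # suffering units per dollar, sign uncertain (made up)
offset_donation = 500                   # much smaller donation to an animal charity (hypothetical)
animal_charity_effect_per_dollar = 2.0  # suffering units averted per dollar (made up)

side_effect = amf_donation * amf_animal_effect_per_dollar    # 100 units, sign could go either way
offset = offset_donation * animal_charity_effect_per_dollar  # 1,000 units averted

print(f"Possible animal side effect of the AMF donation: +/-{side_effect} units")
print(f"Suffering averted by the small offset donation: {offset} units")
print(f"Worst-case net effect on animals: {offset - side_effect} units averted")

# Even taking the worst case for the sign of AMF's side effect, the small targeted
# donation dominates, because it affects animal suffering directly and far more
# cheaply than AMF affects it incidentally.
```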
Nope, ‘little bit bad’ is just relative to other interventions designed to work through that moral framework. No judgement about which system is better or more important is necessary.
Sure, but once you choose to act within a single moral framework, does pairing charities off into portfolios make any sense at all? Nope.
My donations are joint with my partner. We have different moral frameworks.
Yes, and this is a special case of people with different goals trying to fit together. My point was about individual agents’ goals.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
I don’t think so. If you can’t control certain donations then they’re irrelevant to your decision.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’.
This doesn’t seem right: if you got terminal cancer (and so had no better-informed future self to defer to), presumably you wouldn’t consider that a good reason to suddenly ignore animals. The real issue is that you are uncertain about animals’ moral value. So what you should do is form your best-guess, most-informed estimate of animal value and rely on that. If you expect a high chance that you will find reasons to care about animals more, but only a low chance that you will find reasons to care about animals less, then your current estimate is too low, and you should start caring more about animals right now, until you have an unbiased estimate whose chances of being wrong are the same in either direction.
Once you have that unbiased estimate, you should donate to whichever charity maximizes value under it, and it isn’t reasonable to expect your beliefs to be more likely to move in one direction than the other.
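A minimal numerical sketch of that calibration point, with probabilities and values that are purely invented for illustration: if the numbers below really were your credences, the expectation they imply is already your best current estimate, and any predictable upward drift would just mean the current number had been set too low.

```python
# Minimal sketch of the "no predictable direction of belief change" point.
# Probabilities and values are invented for illustration only.

p_care_more = 0.5      # chance you later find reasons to value animals more
p_care_less = 0.5      # chance you later find reasons to value animals less
weight_if_more = 10.0  # moral weight you would then assign (arbitrary units)
weight_if_less = 2.0

# If these are genuinely your credences, your best current estimate is the mean:
current_estimate = p_care_more * weight_if_more + p_care_less * weight_if_less
print(current_estimate)  # 6.0

# Expecting the estimate itself to drift upward over time (rather than merely being
# uncertain about it) would mean current_estimate had been set too low; a
# well-calibrated estimate has no predictable direction of future change.
```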
That’s one actual real-world example of why I think in these terms. I could come up with many others if so desired; the framework is powerful.
Sure, please do.