My donations are joint with my partner. We have different moral frameworks.
Yes, and this is a special case of people with different goals trying to fit together. My point was about individual agents’ goals.
Once you’re looking at any set of donations which you cannot entirely control (which in fact includes your own donations, accounting for different beliefs at different times), thinking in terms of portfolios, trade-offs, and balancing acts makes sense.
I don’t think so. If you can’t control certain donations then they’re irrelevant to your decision.
For a concrete example, I assign non-trivial probability to coming round to the view that animal suffering is really, really important within the next 10 years. So out of deference to my future self (who, all else being equal, is probably smarter and better-informed than I am) I’d like to avoid interventions that are very bad for animals, in Carl’s sense of ‘very bad’.
This doesn’t seem right—if you got terminal cancer, presumably you wouldn’t consider that a good reason to suddenly ignore animals. Rather, you are uncertain about animals’ moral value, so what you should do is form your best-guess, most-informed estimate of that value and rely on it. If you expect a high chance of finding reasons to care about animals more, but only a low chance of finding reasons to care about them less, then your current estimate is too low: you should start caring more about animals right now, until your estimate is unbiased and the chances of being wrong are the same in either direction.
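The point about unbiased estimates is the standard "conservation of expected evidence" result: your current credence must equal the expectation of your future credence, so you cannot coherently expect to update in any particular direction. A minimal sketch (my own toy numbers, not from the thread):

```python
# Conservation of expected evidence: today's credence equals the
# probability-weighted average of tomorrow's possible credences.
from fractions import Fraction

# Hypothetical credence that animals matter a lot (assumed number).
prior_high = Fraction(1, 4)
prior_low = 1 - prior_high

# A possible future observation E, with assumed likelihoods under each view.
p_e_given_high = Fraction(4, 5)
p_e_given_low = Fraction(1, 5)

# Total probability of seeing E.
p_e = prior_high * p_e_given_high + prior_low * p_e_given_low

# Bayesian update in each branch: posterior after E, and after not-E.
post_if_e = prior_high * p_e_given_high / p_e
post_if_not_e = prior_high * (1 - p_e_given_high) / (1 - p_e)

# The expected future credence is exactly the current credence.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert expected_posterior == prior_high
print(expected_posterior)  # → 1/4
```

If your expected future credence were higher than your current one, the fix is to raise the current credence until the two agree.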
In such a case, you should donate to whichever charity maximizes value under this framework, and it isn’t reasonable to expect to be likely to change beliefs in any particular direction.
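"Maximizes value under this framework" can be made concrete: score each charity by its expected value across the moral views you have credence in, then give everything to the single best option. A sketch with hypothetical credences, charities, and values (all assumed for illustration):

```python
# Expected-value maximization under moral uncertainty.

# Assumed credences over moral views (illustrative numbers only).
credences = {"animals_matter_a_lot": 0.25, "animals_matter_little": 0.75}

# Assumed value of each hypothetical charity under each view.
values = {
    "animal_charity": {"animals_matter_a_lot": 100, "animals_matter_little": 5},
    "human_charity": {"animals_matter_a_lot": 20, "animals_matter_little": 30},
}

def expected_value(charity):
    # Weight the charity's value under each view by your credence in that view.
    return sum(credences[view] * values[charity][view] for view in credences)

best = max(values, key=expected_value)
print(best, expected_value(best))  # → animal_charity 28.75
```

Note that this rule picks one charity rather than a portfolio, which is exactly the point under dispute in the thread.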
That’s one real-world example of why I think in these terms; I could come up with many others if desired. The framework is powerful.
Sure, please do.