Of course they might be uncertain of the moral status of animals and therefore uncertain whether donations to an animal welfare charity or a human welfare charity are more effective. That is not at all a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but if an individual splits their donations in that way, they are reducing the impact of their donations relative to contributing only to one or the other.
You seem to be assuming a maximize-expected-choiceworthiness or a my-favorite-theory rule for dealing with moral uncertainty. There are other plausible rules, such as a moral parliament model, which could endorse splitting.
I'm definitely not assuming the my-favorite-theory rule.
I agree that what I'm describing is favored by the maximize-expected-choiceworthiness approach, though I think you should reach the same conclusion even if you don't use it.
Can you explain how a moral parliament would end up voting to split the donations? That seems impossible to me in the case where two conflicting views disagree on the best charity: I don't see any moral trade the party with less credence/voting power can offer the larger party not to just override them. For parliaments with 3+ views but no outright majority, are you envisioning a spoiler view threatening to vote for the charity favored by the second-place view unless the plurality view allocates it some donation money in the final outcome?
edit: actually, I think the donations might end up split if you choose the allocation by randomly selecting a representative in the parliament and implementing their vote, in which case the dominant party could offer a small share of the donations in the cases where it wins in exchange for a share in the cases where another representative is selected?
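To make the random-selection idea above concrete, here is a toy calculation of the expected allocation when a representative is drawn with probability equal to its credence and the whole budget goes to that representative's favorite charity. This is only an illustration of the mechanism described in the comment; the credences, charity names, and budget are made-up assumptions.

```python
# Toy sketch of the "randomly select a representative" mechanism described above.
# Credences, charity names, and the budget are made-up assumptions.
credences = {"animal-focused": 0.3, "human-focused": 0.7}
favorite = {"animal-focused": "animal charity", "human-focused": "human charity"}
budget = 1000.0

# Expected allocation before any bargaining: each party's favorite charity
# receives the whole budget with probability equal to that party's credence.
expected_allocation = {}
for view, p in credences.items():
    charity = favorite[view]
    expected_allocation[charity] = expected_allocation.get(charity, 0.0) + p * budget

print(expected_allocation)  # {'animal charity': 300.0, 'human charity': 700.0}
```

On these numbers the budget is split in proportion to credence in expectation, which is roughly the kind of deterministic split a pre-commitment or trade between the parties could lock in instead of actually randomizing.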
I don't know how philosophically sound they are, but the following rules, taken from the RP moral parliament tool, would end up splitting donations among multiple causes:
Maximize Minimum: "Sometimes termed the 'Rawlsian Social Welfare Function', this method maximizes the payoff for the least-satisfied worldview. This method treats utilities for all worldviews as if they fall on the same scale, despite the fact that some worldviews see more avenues for value than others. The number of parliamentarians assigned to each worldview doesn't matter because the least satisfied parliamentarian is decisive."
Moral Marketplace: "This method gives each parliamentarian a slice of the budget to allocate as they each see fit, then combines each's chosen allocation into one shared portfolio. This process is relatively insensitive to considerations of decreasing cost-effectiveness. For more formal details, see this paper."
There are a few other voting/bargaining-style methods they have that can also lead to splitting.
I don't really have anything intelligent to say about whether it makes sense to apply these rules to individual donations, or whether these rules make sense at all, but I thought they were worth mentioning. A rough sketch of how the two quoted rules would split a budget is below.
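As an illustration (not the RP tool's actual implementation; the worldviews, per-dollar utilities, and budget are made-up assumptions), here is a minimal sketch of how the two quoted rules could allocate a single donor's budget between two causes:

```python
# Minimal sketch of the two quoted rules. The worldviews, per-dollar utilities,
# credences, and budget below are made-up assumptions for illustration only.

# Each worldview assigns a (normalized) utility per dollar to each cause.
utilities = {
    "animal-focused": {"animal charity": 1.0, "human charity": 0.1},
    "human-focused":  {"animal charity": 0.2, "human charity": 1.0},
}
credences = {"animal-focused": 0.3, "human-focused": 0.7}
budget = 1000.0

def moral_marketplace(utilities, credences, budget):
    """Give each worldview a budget slice proportional to its credence; each
    worldview spends its slice on its top-rated cause; the slices are summed."""
    allocation = {cause: 0.0 for cause in next(iter(utilities.values()))}
    for view, credence in credences.items():
        best_cause = max(utilities[view], key=utilities[view].get)
        allocation[best_cause] += credence * budget
    return allocation

def maximize_minimum(utilities, budget, steps=100):
    """Grid-search splits of the budget between two causes and pick the split
    that maximizes the least-satisfied worldview's total payoff. Credences are
    deliberately ignored, as in the quoted description of the rule."""
    causes = list(next(iter(utilities.values())))
    assert len(causes) == 2, "this sketch handles exactly two causes"
    best_split, best_min = None, float("-inf")
    for i in range(steps + 1):
        x = budget * i / steps  # dollars given to the first cause
        split = {causes[0]: x, causes[1]: budget - x}
        worst = min(sum(u[c] * split[c] for c in causes) for u in utilities.values())
        if worst > best_min:
            best_min, best_split = worst, split
    return best_split

print(moral_marketplace(utilities, credences, budget))
# {'animal charity': 300.0, 'human charity': 700.0}
print(maximize_minimum(utilities, budget))
# splits the budget so the two worldviews' payoffs are (nearly) equalized
```

On these made-up numbers, Moral Marketplace splits the budget in proportion to credence, while Maximize Minimum picks the split that equalizes the two worldviews' payoffs and ignores the credences entirely, which is the feature objected to below.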
Thank you very much, I hadn't seen that the moral parliament calculator had implemented all of those.
Moral Marketplace strikes me as quite dubious in the context of allocating a single person's donations, though I'm not sure it's totally illogical.
Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be very dissatisfied if they don't, e.g. "the universe is ruled by a shrimp deity who will torture you and 10^^10 others for eternity unless you donate all your money to shrimp welfare". You can be 99.9999...% sure this isn't true but never 100% sure, so this gets a seat in your parliament.