Someone who is vNM-rational, with a utility function that is partly altruistic and partly selfish, wouldn't give a fixed percentage of their income to charity (or commit to a lower bound on giving, like 10%). Such a person would dynamically adjust their relative spending on selfish interests and altruistic causes depending on empirical contingencies: spending more on altruistic causes when new evidence shows them to be more cost-effective than previously expected, and conversely spending less on them when they turn out to be less cost-effective than expected. (See Is the potential astronomical waste in our universe too small to care about? for a related idea.)
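To make this concrete, here is a minimal sketch, a toy model of my own rather than anything from the linked post: an agent with utility ln(c) + k·E·d over selfish spending c and donations d = income − c, where E is the charity's cost-effectiveness and k is an assumed altruism weight. All the numbers (income, k, E) are illustrative.

```python
# A toy model (illustrative assumptions throughout): a vNM-rational agent
# with utility U = ln(c) + k * E * d, where c is selfish spending,
# d = income - c is donations, E is the charity's cost-effectiveness
# (impact per dollar), and k is an assumed weight on altruism.
# The first-order condition 1/c = k*E gives c* = 1/(k*E), so the optimal
# donation fraction shifts whenever the agent's estimate of E changes.

def optimal_donation_fraction(income: float, k: float, E: float) -> float:
    c_star = min(income, 1.0 / (k * E))  # optimal selfish spending, capped at income
    return (income - c_star) / income

for E in (0.5, 1.0, 2.0, 4.0):
    frac = optimal_donation_fraction(income=50_000, k=4e-5, E=E)
    print(f"cost-effectiveness {E}: donate {frac:.0%}")
# -> donate 0%, 50%, 75%, 88% of income, respectively
```

Doubling the estimated cost-effectiveness moves this agent's giving from half their income to three quarters; nothing in the model holds the fraction fixed.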
I think this means we have to find other ways of explaining/modeling charitable giving, including the kind encouraged in the EA community.
As a specific case, a counterfactual donation match raises the effective cost-effectiveness of each dollar you give, so it should cause you to donate more, too.
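Concretely, in the illustrative model sketched above, a 1:1 counterfactual match turns E into 2E; with the example numbers there, that moves the optimal donation from 50% to 75% of income.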
It could be that people's utility functions are sharply peaked near X% of income, so that new information makes little difference. They're probably directly valuing giving X% of income, perhaps as a personal goal. Some might instead feel that they are already spending as much as they want on themselves, so the rest should go to charity.
https://slate.com/human-interest/2011/01/go-ahead-give-all-your-money-to-charity.html
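One way to picture such a sharply-peaked utility function (again an illustrative assumption, reusing the toy model above) is to add a term that directly rewards hitting a giving target, modeled here as a steep penalty for missing 10%:

```python
import math

# Extends the toy model above (same assumed names): an agent who directly
# values giving 10% of income, modeled as a sharp penalty m * |frac - target|
# for missing the target. An illustrative assumption, not a claim about
# anyone's actual utility function.

def best_fraction_with_target(income, k, E, target=0.10, m=5.0, steps=1000):
    def utility(frac):
        c = income * (1.0 - frac)  # selfish spending
        d = income * frac          # donation
        return math.log(c) + k * E * d - m * abs(frac - target)
    # grid search over donation fractions strictly between 0 and 1
    return max((i / steps for i in range(1, steps)), key=utility)

for E in (0.5, 1.0, 2.0, 4.0):
    frac = best_fraction_with_target(income=50_000, k=4e-5, E=E)
    print(f"cost-effectiveness {E}: donate {frac:.1%}")
# -> pinned at 10.0% for E = 0.5, 1.0, and 2.0; jumps to ~66.7% at E = 4.0
```

With the penalty term dominating, moderate updates to E leave the optimum pinned at 10%; only a sufficiently large update breaks the agent out of the target.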
Or maybe their utility functions just change with new information?