“Unless critics seriously want billionaires to deliberately try to do less good rather than more, it’s hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.”
I don’t think the only alternative to wanting billionaires to actively try to do good is arguing for the obviously foolish idea that they should try to do less good. There might be many reasons not to promote the idea of billionaires ‘doing more good’. For example, you might believe they hold an inordinate amount of power, and that in actively trying to do good they will ultimately do harm, whether through misalignment or through mistakes in EA’s ideas of what would do good, even if the person remains aligned. (This is a particular problem at certain magnitudes of money and influence; it is less of an issue when people have less power, since the damage will be smaller.) You might also simply not want to draw such powerful people’s attention to the orders of magnitude more influence they could have.
I take your statement to be arguing that the possible effect on billionaires is not an argument against EA principles per se, and on that I’d agree. But in my view that reasonable side of the argument loses force when paired with what seems like a silly claim: that people would be arguing something no one would actually argue.
I think the full section addresses this (but let me know if you disagree), via the following:
Alternatively, if one believes that there are compelling arguments that billionaire philanthropy necessarily does more harm than good, then one might instead conclude that the best thing billionaires can do is voluntarily pay more taxes (i.e., donate to the US Treasury). That would be a surprising result, and I doubt that many actually believe it, but it is at least conceptually possible. Even that, though, is no objection to EA principles, just a possible implication of them (when combined with unusual empirical assumptions).
The general point (as stressed throughout the paper) is that we need to take total evidence into account. If there’s evidence that in “actively trying to do good they will ultimately do harm”, then rationally doing good actually entails something different from what you’re imagining when you describe them as “actively trying”. EA principles would imply that we draw billionaires’ attention to these risks, and encourage them to help in whatever ways are actually better in expectation.
Sure, I don’t think what you’re saying is technically incorrect. It’s just that, rhetorically, I would read you as less sincere, and therefore less convincing in your engagement with critics, if there seems to be an implication that comes across a bit like ‘unless people believe something stupid, their critiques don’t make sense’. But this may also be a reaction to seeing only the excerpted quote rather than the whole text.