This is more of a misread than a strawman, but on page 8 the paper says:
> Sometimes the institutional critique is stated in ways that illegitimately presuppose that “complicity” with suboptimal institutions entails net harm. For example, Adams, Crary, and Gruen (2023, xxv) write:
>
> > EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.” (emphasis added)
>
> This reasoning is straightforwardly invalid. It’s entirely possible (indeed, plausible) that you may do the most good by supporting some structures that cause suffering. For one thing, even the best possible structures, like democracy, will likely cause some suffering; it suffices that the alternatives are even worse. For another, even a suboptimal structure might be too costly, or too risky, to replace. But again, if there’s evidence that current EA priorities are actually doing more harm than good, then that’s precisely the sort of thing that EA principles are concerned with. So it makes literally no sense to express this as an external critique (i.e. of the ideas, rather than their implementation).
I don’t think saying that Adams, Crary, and Gruen “illegitimately presuppose that ‘complicity’ with suboptimal institutions entails net harm” is correct. The paper misunderstands what they were saying. Here’s the full sentence (emphasis added):
> Taken together, the book’s chapters show that in numerous interrelated areas of social justice work (including animal protection, antiracism, public health advocacy, poverty alleviation, community organizing, the running of animal sanctuaries, education, feminist and LGBTQ politics, and international advocacy) EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.”
I interpret it as saying:
> The way the EA movement/community/professional network employs EA principles in practice supports and enables fundamental causes of suffering, which undermines EA’s ability to do the most good.
In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay.
But they never even try to argue that EA support for “the very social structures that cause suffering” does more harm than good. As indicated by the “thereby”, they seem to take the mere fact of complicity to suffice for “undermining its efforts to ‘do the most good.’”
I agree that they’re talking about the way that EA principles are “actualized”. They’re empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique. I’m pointing out that this fact doesn’t suffice. They need to further show that the complicity does more harm than good.
> Effective altruism sounds so innocuous: who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. … Every decent person should share the basic goals or values underlying effective altruism.
It starts here in the abstract: writing this way immediately sounds condescending to me, making disagreement with EA sound like an entirely unreasonable affair. So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.
> Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. … If it does not, then by their own lights they have no basis for thinking it a better option.
On systemic change: The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills, higher maxima, out there, but we do not know how to get there, and any particular systemic change might just as well make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, then those would not be principles I would be happy following. So yes, people who support systemic change often do not have the mathematical basis to argue that it necessarily will be a good deal, but that does not mean that there is no basis for thinking attempting it is a good option. Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
> Rare exceptions aside, most careers are presumably permissible. … This claim is both true and widely neglected. … Neither of these important truths is threatened by the deontologist’s claim that one should not pursue an impermissible career.
On earning to give: Again, the arguments are very simplified here. Whether a career is permissible is not a binary choice, true or false. It is a gradient, and it fluctuates and evolves over time, depending on how what you are asked to do on the job changes, and on how the ambient morality of yourself and of society shifts. So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?” but “what tradeoff should I be willing to make between the career being more morally iffy and the positive impact I can have by donating from a larger income baseline?” Additionally, if you still donate e.g. 10% of your income but your income is higher, there is also a larger amount of money you do not donate, which you might counterfactually use to buy things you do not actually need, things that have to be produced and shipped and so on, in the worst case making the world a worse place for everyone. So even “more money = more good” is not a simple truth that just holds. And despite all these simplifications, the sentence “This claim is … true” just really, really gets to me; such binary language again completely sweeps any criticism, any debate, any nuance under the rug.
> EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth … Unless critics seriously want billionaires to deliberately try to do less good rather than more, it’s hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.
On billionaire philanthropy: Yes, billionaires are capable of doing immense good, and again, I have not seen anyone actually arguing against that. The most common arguments I am aware of against billionaire philanthropists are (1) that billionaires shouldn’t exist in the first place: yes, they have the capacity to do immense good, but also the capacity to do immense harm, and no single person should be allowed the capacity to do so much harm to living beings on a whim. And (2) that billionaires can pay people to advise them on how best to make it look like they are doing good when actually they are not (for example, creating huge charitable foundations and equipping them with lots of money, while these foundations then simply re-invest that money into projects run by companies the billionaires hold shares in, etc.).
So that is what I mean by “arguing against strawpeople”: claims are so far simplified and/or misrepresented that they do not accurately represent the actual positions of EAers, or of the people who criticise them.
> So this is devaluing the position of a hypothetical someone opposing EA, rather than honestly engaging with their criticisms.
That’s a non-sequitur. There’s no inconsistency between holding a certain conclusion (that “every decent person should share the basic goals or values underlying effective altruism”) and “honestly engaging with criticisms”. I do both. (Specifically, I engage with criticisms of EA principles; I’m very explicit that the paper is not concerned with criticisms of “EA” as an entity.)
I’ve since reworded the abstract, since the “every decent person” phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That’s a view I hold, and I’m happy to defend it. You’re trying to assert that my conclusion is illegitimate or “dishonest” prior to even considering my supporting reasons, and that’s frankly absurd.
> The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness, and we know that there must be higher hills, higher maxima, out there, but we do not know how to get there; any particular systemic change might just as well make things worse.
Yes, and my “whole point” is to respond to this by observing that one’s total evidence either supports the gamble of moving in a different direction, or it does not. You don’t seem to have understood my argument, which is fine (I’m guessing you don’t have much philosophy background), but it really should make you more cautious in your accusations.
> Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
It’s all about uncertainty; that’s what “in expectation” refers to. I’m certainly not attributing certainty to the proponent of systemic change (that would indeed be a strawperson), but it’s an egregious misreading to think that I’m making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)
> the sentence “This claim is … true” just really, really gets to me
Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn’t mean that they’re failing to engage honestly with those who disagree with them.
> So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?”
Now this is a straw man! The view I defend there is rather that “we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings.” Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.
> The most common arguments I am aware of against billionaire philanthropists are...
Those aren’t arguments against how EA principles apply to billionaires, so they aren’t relevant to my paper.
> So that is what I mean by “arguing against strawpeople”
You didn’t accurately identify any misrepresentations or fallacies in my paper. It’s just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.