When I saw the title of this post, I really wanted to like it, and I appreciate the effort that has gone into it all so far.
Unfortunately, I have to agree with Paul—both the post and the paper draft itself read as pretty weak to me. In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA is advocating for “obviously” being correct. (You wrote in the post that the arguments are very much shortened because there is just so much ground to cover, but I believe that if an argument cannot be made in a convincing way, we should either spend more time making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best.)
Also, you seem to defend not all of EA, but whatever part of EA is most easily defensible in each particular paragraph, such as arguing that EA does not require people to always follow its moral implications, only sometimes—which some EAers might agree with, but certainly not all.
> Can you mention some places where you think he has strawmanned people and what you think the correct interpretation of them is?

This is more of a misread than a strawman, but on page 8 the paper says:
> Sometimes the institutional critique is stated in ways that illegitimately presuppose that “complicity” with suboptimal institutions entails net harm. For example, Adams, Crary, and Gruen (2023, xxv) write:
>
> > EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.” (emphasis added)
>
> This reasoning is straightforwardly invalid. It’s entirely possible—indeed, plausible—that you may do the most good by supporting some structures that cause suffering. For one thing, even the best possible structures—like democracy—will likely cause some suffering; it suffices that the alternatives are even worse. For another, even a suboptimal structure might be too costly, or too risky, to replace. But again, if there’s evidence that current EA priorities are actually doing more harm than good, then that’s precisely the sort of thing that EA principles are concerned with. So it makes literally no sense to express this as an external critique (i.e. of the ideas, rather than their implementation).
I don’t think it is correct to say that Adams, Crary, and Gruen “illegitimately presuppose that ‘complicity’ with suboptimal institutions entails net harm”. The paper misunderstands what they were saying. Here’s the full sentence (emphasis added):
> Taken together, the book’s chapters show that in numerous interrelated areas of social justice work—including animal protection, antiracism, public health advocacy, poverty alleviation, community organizing, the running of animal sanctuaries, education, feminist and LGBTQ politics, and international advocacy—EA’s principles are actualized in ways that support some of the very social structures that cause suffering, thereby undermining its efforts to “do the most good.”
I interpret it as saying:
> The way the EA movement/community/professional network employs EA principles in practice supports and enables fundamental causes of suffering, which undermines EA’s ability to do the most good.
In other words, it is an empirical claim that the way EA is carried out in practice has some counterproductive results. It is not a normative claim about whether complicity with suboptimal institutions is ever okay.
But they never even try to argue that EA support for “the very social structures that cause suffering” does more harm than good. As indicated by the “thereby”, they seem to take the mere fact of complicity to suffice for “undermining its efforts to ‘do the most good’.”
I agree that they’re talking about the way that EA principles are “actualized”. They’re empirically actualized in ways that involve complicity with suboptimal institutions. And the way these authors argue, they take this fact to suffice for critique. I’m pointing out that this fact doesn’t suffice. They need to further show that the complicity does more harm than good.
Here is my criticism in more detail:

> Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. … Every decent person should share the basic goals or values underlying effective altruism.
It starts here, in the abstract—writing this way immediately sounds condescending to me, making disagreement with EA sound like an entirely unreasonable affair. So this is devaluing the position of a hypothetical opponent of EA, rather than honestly engaging with their criticisms.
> Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. … If it does not, then by their own lights they have no basis for thinking it a better option.
On systemic change: The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills—higher maxima—out there, but we do not know how to get there, and any particular systemic change might just as easily make things worse. But if EA principles told us to only ever sit at this local maximum and never even attempt to go anywhere else, then those would not be principles I would be happy following.

So yes, people who support systemic change often do not have the mathematical basis to argue that it will necessarily be a good deal—but that does not mean there is no basis for thinking that attempting it is a good option.

Or, more clearly: by not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
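To make this concrete, here is a toy sketch (with entirely made-up numbers of my own; nothing here comes from the paper) of the expected-value comparison at issue: a “safe” intervention with a known payoff versus a systemic-change gamble whose outcomes we can only guess at.

```python
# Toy expected-value comparison: a safe intervention vs. a systemic-change
# gamble. All probabilities and payoffs are invented for illustration only.

safe_value = 100.0  # known payoff of the status-quo intervention (arbitrary units)

# Three hand-picked scenarios for a systemic-change attempt: (probability, payoff).
# The payoffs include the possibility of making things actively worse.
systemic_scenarios = [
    (0.10, 2000.0),   # reform succeeds and reaches a higher "hill"
    (0.60, 50.0),     # reform fizzles; resources are partly wasted
    (0.30, -300.0),   # reform backfires and causes net harm
]

expected_systemic = sum(p * v for p, v in systemic_scenarios)
print(f"safe: {safe_value:.0f}, systemic (expected): {expected_systemic:.0f}")
# -> safe: 100, systemic (expected): 140
```

The arithmetic is trivial; the contention is over whether we can put trustworthy numbers into it at all. Halve the success probability to 0.05 (moving the difference into the “fizzles” scenario) and the expected value drops to about 43, below the safe option. Everything hangs on estimates we simply do not have.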
> Rare exceptions aside, most careers are presumably permissible. … This claim is both true and widely neglected. … Neither of these important truths is threatened by the deontologist’s claim that one should not pursue an impermissible career.
On earning to give: Again, the arguments are very simplified here. Whether a career is permissible is not a binary choice, true or false. It is a gradient, and it fluctuates and evolves over time, depending on how what you are asked to do on the job changes, and on how your own moral views and those of society shift. So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?” but “what tradeoff should I be willing to make between the career being more morally iffy, and the positive impact I can have by donating from a larger income baseline?” (a toy sketch below illustrates this tradeoff). Additionally, if you still donate only, say, 10% of your income but your income is higher, there is also a larger amount of money you do not donate. Counterfactually, you might spend that money on things you do not actually need, which must be produced and shipped and so on, in the worst case making the world a worse place for everyone. So even “more money = more good” is not a simple truth that just holds.

And despite all these simplifications, the sentence “This claim is … true” just really, really gets to me—such binary language again completely sweeps any criticism, any debate, any nuance under the rug.
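To return to the tradeoff question above, here is the toy sketch I mentioned (again with entirely made-up numbers and a framing of my own; the paper proposes no such model): the extra donations from a higher-paying career have to be weighed against the moral cost of the work itself and the footprint of the extra income you keep.

```python
# Toy model of the earning-to-give tradeoff. All numbers are invented.

def net_impact(income, donation_rate, career_harm, kept_income_harm_rate):
    """Crude net-impact estimate for one career option (arbitrary units).

    income: annual income
    donation_rate: fraction of income donated
    career_harm: moral cost of the work itself (the "iffiness")
    kept_income_harm_rate: harm per unit of income kept and consumed
    """
    donated = income * donation_rate
    kept = income - donated
    return donated - career_harm - kept * kept_income_harm_rate

# A modest, unobjectionable career vs. a higher-paying, morally iffier one:
modest = net_impact(income=50_000, donation_rate=0.10,
                    career_harm=0, kept_income_harm_rate=0.02)
lucrative = net_impact(income=200_000, donation_rate=0.10,
                       career_harm=10_000, kept_income_harm_rate=0.02)
print(f"modest: {modest:.0f}, lucrative: {lucrative:.0f}")
# -> modest: 4100, lucrative: 6400
```

Whether the lucrative option wins depends entirely on how you price the iffiness and the consumption footprint: raise career_harm to 20,000 in this toy model and its net impact goes negative, so there is no shortcut from “permissible and higher-paying” to “better”.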
> EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth … Unless critics seriously want billionaires to deliberately try to do less good rather than more, it’s hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.
On billionaire philanthropy: Yes, billionaires are capable of doing immense good, and again, I have not seen anyone actually arguing against that. The most common arguments I am aware of against billionaire philanthropists are (1) that billionaires just shouldn’t exist in the first place: yes, they have the capacity to do immense good, but also the capacity to do immense harm, and no single person should be allowed the capacity to do so much harm to living beings on a whim; and (2) that billionaires are capable of paying people to advise them on how best to make it look like they are doing good when actually they are not (for example, by creating huge charitable foundations and equipping them with lots of money, only for those foundations to re-invest that money into projects run by companies the billionaires hold shares in).
So that is what I mean by “arguing against strawpeople”—claims are simplified and/or misrepresented to the point that they no longer accurately represent the actual positions of EAers, or of the people who criticise them.
> So this is devaluing the position of a hypothetical opponent of EA, rather than honestly engaging with their criticisms.
That’s a non-sequitur. There’s no inconsistency between holding a certain conclusion—that “every decent person should share the basic goals or values underlying effective altruism”—and “honestly engaging with criticisms”. I do both. (Specifically, I engage with criticisms of EA principles; I’m very explicit that the paper is not concerned with criticisms of “EA” as an entity.)
I’ve since reworded the abstract since the “every decent person” phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That’s a view I hold, and I’m happy to defend it. You’re trying to assert that my conclusion is illegitimate or “dishonest”, prior to even considering my supporting reasons, and that’s frankly absurd.
> The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills—higher maxima—out there, but we do not know how to get there, and any particular systemic change might just as easily make things worse.
Yes, and my “whole point” is to respond to this by observing that one’s total evidence either supports the gamble of moving in a different direction, or it does not. You don’t seem to have understood my argument, which is fine (I’m guessing you don’t have much philosophy background), but it really should make you more cautious in your accusations.
> Or, more clearly: by not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
It’s all about uncertainty—that’s what “in expectation” refers to. I’m certainly not attributing certainty to the proponent of systemic change—that would indeed be a strawperson, but it’s an egregious misreading to think that I’m making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)
> the sentence “This claim is … true” just really, really gets to me
Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn’t mean that they’re failing to engage honestly with those who disagree with them.
> So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?”
Now this is a straw man! The view I defend there is rather that “we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings.” Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.
> The most common arguments I am aware of against billionaire philanthropists are...
Those aren’t arguments against how EA principles apply to billionaires, so aren’t relevant to my paper.
> So that is what I mean by “arguing against strawpeople”
You didn’t accurately identify any misrepresentations or fallacies in my paper. It’s just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.
> you seem to defend not all of EA, but whatever part of EA is most easily defensible in each particular paragraph, such as arguing that EA does not require people to always follow its moral implications, only sometimes—which some EAers might agree with, but certainly not all.
This criticism suggests that you have not understood the point of the paper. I’m defending the core ideas behind EA. It’s just a basic logical point that defending EA principles as such does not require defending the more specific views of particular EAs.
> In many instances, it seems that you argue against strawpeople rather than engaging with criticism of EA in good faith, and even worse, the arguments you use to counter the criticism boil down to what EA is advocating for “obviously” being correct
This is far too vague to be helpful (and so comes off as gratuitously insulting). What instances? Which of my specific counterarguments do you find unpersuasive, and why? I do indeed conclude that the core principles of EA are undeniably correct. I never claim that any specific cause EAs “advocate for” is even correct, let alone obviously so.
> I believe that if an argument cannot be made in a convincing way, we should either spend more time making it properly or drop the discussion entirely, rather than just vaguely pointing towards something and hoping for the best
I agree with that methodological claim. (I flag the brevity just to indicate that there is, of course, always more that could be said. But I wouldn’t say what I do if I didn’t think it was productive and important, even in its brief form.) I believe that I made convincing arguments that go beyond “vaguely pointing… and hoping for the best.” Perhaps you could apply this same methodological principle to your own comments.
I understand that my vague criticism was unhelpful; sadly, when posting I did not have enough time to really point out specific instances, and thought it would still be higher value to mention it in general than to just not write anything at all.
I will try to find the time now to write down my criticisms in more detail, and once I am ready I will comment on the question from Dr. David Mathers above, as he also asked for it (and by commenting both here and there, you will both be notified. Hooray.)