So this devalues the position of a hypothetical opponent of EA, rather than honestly engaging with their criticisms.
That’s a non-sequitur. There’s no inconsistency between holding a certain conclusion—that “every decent person should share the basic goals or values underlying effective altruism”—and “honestly engaging with criticisms”. I do both. (Specifically, I engage with criticisms of EA principles; I’m very explicit that the paper is not concerned with criticisms of “EA” as an entity.)
I’ve since reworded the abstract since the “every decent person” phrasing seems to rub people the wrong way. But it is my honest view. EA principles = beneficentrism, and rejecting beneficentrism is morally indecent. That’s a view I hold, and I’m happy to defend it. You’re trying to assert that my conclusion is illegitimate or “dishonest”, prior to even considering my supporting reasons, and that’s frankly absurd.
The whole point is that systemic change is very hard to estimate. It is like sitting on a local maximum of awesomeness: we know that there must be higher hills—higher maxima—out there, but we do not know how to get there, and any particular systemic change could just as easily make things worse.
Yes, and my “whole point” is to respond to this by observing that one’s total evidence either supports the gamble of moving in a different direction, or it does not. You don’t seem to have understood my argument, which is fine (I’m guessing you don’t have much philosophy background), but it really should make you more cautious in your accusations.
Or, more clearly: By not mentioning uncertainty in this paragraph, I do believe you are arguing against a strawperson, as the presence of uncertainty is absolutely crucial to the argument.
It’s all about uncertainty—that’s what “in expectation” refers to. I’m certainly not attributing certainty to the proponent of systemic change—that would indeed be a strawperson, but it’s an egregious misreading to think that I’m making any such misattribution. (Especially since the immediately preceding paragraphs were discussing uncertainty, explicitly and at length!)
the sentence “This claim is … true” just really, really gets to me
Again, I think this is just a result of your not being familiar with the norms of philosophy. Philosophers talk about true claims all the time, and it doesn’t mean that they’re failing to engage honestly with those who disagree with them.
So the question is not “among all of these completely equivalent permissible options, should I choose the highest-paying one and earn to give?”
Now this is a straw man! The view I defend there is rather that “we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings.” Reasons always need to be balanced against countervailing reasons. The point of the appeal to permissibility is just to allow that some careers may be ruled out as a matter of deontic constraints. But obviously more moderate harms also need to be considered, and balanced against the benefits, and I never suggest otherwise.
The most common arguments I am aware of against billionaire philanthropists are...
Those aren’t arguments against how EA principles apply to billionaires, so aren’t relevant to my paper.
So that is what I mean by “arguing against strawpeople”
You didn’t accurately identify any misrepresentations or fallacies in my paper. It’s just a mix of (i) antecedently disliking the strength of my conclusion, (ii) not understanding philosophy, and (iii) your being more interested in a different topic than what my paper addresses.