Ok, I guess it's worth thinking about different audiences here. Something that's largely tendentious nonsense but includes something of a fresh (for you) perspective could be overall epistemically beneficial for you (since you don't risk getting sucked in by the nonsense, and might have new thoughts inspired by the "freshness"), while being extremely damaging to a general audience who take it at face value (won't think of the relevant "steelman" improvements), and have no exposure to, or understanding of, the "other side".
I saw a bunch of prominent academic philosophers sharing the WIRED article with a strong vibe of "This shows how we were right to dismiss EA all along!" I can only imagine what a warped impression the typical magazine reader would have gotten from it. The anti-GiveWell stuff, especially, struck me as incredibly reckless and irresponsible for an academic to write for a general audience (for the reasons I set out in my critique). So, at least with regard to the WIRED article, I'd encourage you to resist any inference from "well, I found it pretty helpful" to "it wasn't awful." Smart people can have helpful thoughts sparked by awful, badly-reasoned texts!
Yeah, I agree that audience matters. I would feel bad about these articles being one of the few exposures someone had to EA. (Which means I'd probably end up feeling quite bad about the WIRED article; although possibly I'd end up thinking it was productive in advancing the conversation by giving voice to concerns that many people already felt, even if those concerns ended up substantively incorrect.)
But this letter is targeted at young people in EA. By assumption, they're not going to be ignorant of the basics. And besides any insights I might have got, I think there's something healthy and virtuous about people being able to try on the perspective of "here's how EA seems maybe flawed": even if the precise criticisms aren't quite right, it could help open people up to noticing subtle but related flaws. And I think the emotional register of the piece is kind of good for that purpose?
To be clear: I'm far from an unmitigated fan of the letter. I disagree with the conclusions, but even keeping those fixed there are a ton of changes that would make me happier with it overall. I wouldn't want to be sending people the message "hey, this is right, you need to read it". But I do feel good about sending the message "hey, this has some interesting perspectives, and it also covers reasons why some smart, caring people get off the train; if you're trying to deepen your understanding of this EA thing, it's worth a look (and also a look at rebuttals)". Like, I think it's valuable to have something in the niche of "canonical external critique", and maybe this isn't in the top slot for that (I remember feeling good about Michael Nielsen's notes), but I think it's up there.
Yeah, I don't particularly mind this letter (though I see a lot more value in the critiques from Nielsen, NunoSempere, and Benjamin Ross Hoffman). I'm largely reacting to your positive annotated comments about the WIRED piece.
That said, I really don't think Wenar is (even close to) "substantively correct" on his "share of the total" argument. The context is debating how much good EA-inspired donations have done. He seems to think the answer should be discounted by all the other (non-EA?) people involved in the causal chain, or that maybe only the final step should count (!?). That's silly. The relevant question is counterfactual. When co-ordinating with others, you might want to assess a collective counterfactual rather than an individual counterfactual, to avoid double-counting (I take it that something along these lines is your intended steelman?); but that seems pretty distant from Wenar's confused reasoning about the impact of philanthropic donations.
I agree that Wenar's reasoning on this is confused, and that he doesn't have a clear idea of how it's supposed to work.
I do think that he's in some reasonable way gesturing at the core issue, even if he doesn't say very sensible things about how to address that issue.
And yeah, that's the rough shape of the steelman position I have in mind. I wrote a little about my takes here; sorry I've not got anything more comprehensive: https://forum.effectivealtruism.org/posts/rWoT7mABXTfkCdHvr/jp-s-shortform?commentId=ArPTtZQbngqJ6KSMo
Thanks for the link. (I'd much rather people read that than Wenar's confused thoughts.)
Here's the bit I take to represent the "core issue":
If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different.
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that he's (i) conceptually confused about moral credit in a way that is deeply unreasonable, (ii) thinking about how to discredit EA, not how to optimize coordination, and (iii) simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I'm right about (i)-(iii), then I don't think it's accurate to characterize him as "in some reasonable way gesturing at the core issue."
I guess I think it's likely some middle ground? I don't think he has a clear conceptual understanding of moral credit, but I do think he's tuning in to ways in which EA claims may be exaggerating the impact people can have. I find it quite easy to believe that's motivated by some desire to make EA look bad, but so what? If people who want to make EA look bad make for good researchers hunting for (potentially substantive) issues, so much the better.