Yeah, I agree that audience matters. I would feel bad about these articles being one of the few exposures someone had to EA. (Which means I’d probably end up feeling quite bad about the WIRED article; although possibly I’d end up thinking it was productive in advancing the conversation by giving voice to concerns that many people already felt, even if those concerns ended up substantively incorrect.)
But this letter is targeted at young people in EA. By assumption, they’re not going to be ignorant of the basics. And besides any insights I might have got from it, I think there’s something healthy and virtuous about people being able to try on the perspective of “here’s how EA seems maybe flawed”—like even if the precise criticisms aren’t quite right, it could help open people up to noticing subtle but related flaws. And I think the emotional register of the piece is kind of good for that purpose?
To be clear: I’m far from an unmitigated fan of the letter. I disagree with the conclusions, but even keeping those fixed there are a ton of changes that would make me happier with it overall. I wouldn’t want to be sending people the message “hey this is right, you need to read it”. But I do feel good about sending the message “hey this has some interesting perspectives, and this also covers reasons why some smart caring people get off the train; if you’re trying to deepen your understanding of this EA thing, it’s worth a look (and also a look at rebuttals)”. Like I think it’s valuable to have something in the niche of “canonical external critique”, and maybe this isn’t in the top slot for that (I remember feeling good about Michael Nielsen’s notes), but I think it’s up there.
Yeah, I don’t particularly mind this letter (though I see a lot more value in the critiques from Nielsen, NunoSempere, and Benjamin Ross Hoffman). I’m largely reacting to your positive annotated comments about the WIRED piece.
That said, I really don’t think Wenar is (even close to) “substantively correct” on his “share of the total” argument. The context is debating how much good EA-inspired donations have done. He seems to think the answer should be discounted by all the other (non-EA?) people involved in the causal chain, or that maybe only the final step should count (!?). That’s silly. The relevant question is counterfactual. When co-ordinating with others, you might want to assess a collective counterfactual rather than an individual counterfactual, to avoid double-counting (I take it that something along these lines is your intended steelman?); but that seems pretty distant from Wenar’s confused reasoning about the impact of philanthropic donations.
I agree that Wenar’s reasoning on this is confused, and that he doesn’t have a clear idea of how it’s supposed to work.
I do think that he’s in some reasonable way gesturing at the core issue, even if he doesn’t say very sensible things about how to address that issue.
And yeah, that’s the rough shape of the steelman position I have in mind. I wrote a little about my takes here; sorry I’ve not got anything more comprehensive: https://forum.effectivealtruism.org/posts/rWoT7mABXTfkCdHvr/jp-s-shortform?commentId=ArPTtZQbngqJ6KSMo
Thanks for the link. (I’d much rather people read that than Wenar’s confused thoughts.)
Here’s the bit I take to represent the “core issue”:

If everyone thinks in terms of something like “approximate shares of moral credit”, then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they’d all done something different.
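To make the quoted idea concrete, here’s a toy sketch of the double-counting worry. The three-funder setup and all the numbers are made up purely for illustration; it isn’t meant to model Wenar’s reasoning or anyone’s actual impact estimates, and the equal split at the end is just one crude way of assigning “shares”.

```python
# Toy illustration of the double-counting worry (all numbers made up).
# Suppose a project produces 300 units of value, but only if all three
# funders participate; if any one of them drops out, the project fails.

PROJECT_VALUE = 300
FUNDERS = ["A", "B", "C"]

def value_with(participants):
    """Value realised given which funders participate (toy model)."""
    return PROJECT_VALUE if set(participants) == set(FUNDERS) else 0

# Individual counterfactual: value with everyone minus value without just me.
individual = {
    f: value_with(FUNDERS) - value_with([g for g in FUNDERS if g != f])
    for f in FUNDERS
}
print(individual)                # {'A': 300, 'B': 300, 'C': 300}
print(sum(individual.values()))  # 900 -- but only 300 units of value exist

# Collective counterfactual: the whole group acting versus nobody acting.
collective = value_with(FUNDERS) - value_with([])
print(collective)                # 300

# One crude notion of "approximate shares of moral credit": divide the
# collective counterfactual so that the shares sum to the value produced.
shares = {f: collective / len(FUNDERS) for f in FUNDERS}
print(shares)                    # {'A': 100.0, 'B': 100.0, 'C': 100.0}
```

On this toy picture, each funder’s individual counterfactual is the full 300, summing those triple-counts the value actually produced, and something like credit shares (or a collective counterfactual, suitably divided) is what keeps the books balanced.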
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that he’s (i) conceptually confused about moral credit in a way that is deeply unreasonable, (ii) thinking about how to discredit EA, not how to optimize coordination, and (iii) simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I’m right about (i)-(iii), then I don’t think it’s accurate to characterize him as “in some reasonable way gesturing at the core issue.”
I guess I think it’s likely some middle ground? I don’t think he has a clear conceptual understanding of moral credit, but I do think he’s tuning in to ways in which EA claims may be exaggerating the impact people can have. I find it quite easy to believe that’s motivated by some desire to make EA look bad—but so what? If people who want to make EA look bad make for good researchers hunting for (potentially-substantive) issues, so much the better.