I agree that Wenar’s reasoning on this is confused, and that he doesn’t have a clear idea of how it’s supposed to work.
I do think that he’s in some reasonable way gesturing at the core issue, even if he doesn’t say very sensible things about how to address that issue.
And yeah, that’s the rough shape of the steelman position I have in mind. I wrote a little about my takes here; sorry I’ve not got anything more comprehensive: https://forum.effectivealtruism.org/posts/rWoT7mABXTfkCdHvr/jp-s-shortform?commentId=ArPTtZQbngqJ6KSMo
Thanks for the link. (I’d much rather people read that than Wenar’s confused thoughts.)
Here’s the bit I take to represent the “core issue”:
If everyone thinks in terms of something like “approximate shares of moral credit”, then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they’d all done something different.
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that (i) he’s conceptually confused about moral credit in a way that is deeply unreasonable, (ii) he’s thinking about how to discredit EA, not how to optimize coordination, and (iii) he simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I’m right about (i)-(iii), then I don’t think it’s accurate to characterize him as “in some reasonable way gesturing at the core issue.”
I guess I think the truth is likely some middle ground? I don’t think he has a clear conceptual understanding of moral credit, but I do think he’s tuning in to ways in which EA claims may exaggerate the impact people can have. I find it quite easy to believe that’s motivated by some desire to make EA look bad. But so what? If people who want to make EA look bad make for good researchers hunting down (potentially substantive) issues, so much the better.