To voters: Please remember that the downvote button is not a disagreevote button. Under the circumstances here, it is mainly a reduce visibility / push off the homepage button. This is an attempt by an important critic to engage with the community. Although my reaction to the letter is mixed, I—like Owen—think it is offered in good faith and has some important points to make. I would hesitate before making it harder for others to encounter and decide for themselves whether to read. I would particularly hesitate before downvoting primarily because you didn’t like the Wired article (I didn’t either) rather than on the merits of this open letter.
To readers: In my view, more of the value is nearer the end, so if you’re short on time / feeling frustrated by the first parts, you might skip ahead to “Feedback from reality” on page 12 of 17 rather than jettisoning the whole document. I’d also say that the content gradually moves away from / beyond what I recall of the Wired article and into more practical advice for young EAs.
[I accidentally pressed ‘Comment’ before a previous version of this comment was finished; I have deleted that comment.]
Please remember that the downvote button is not a disagreevote button. Under the circumstances here, it is mainly a reduce visibility / push off the homepage button
I would encourage voters to vote based on their views about the merits of the letter, rather than on the effects on its visibility. In general, I think voting based on effects is a kind of “naive consequentialism”, which has worse consequences than voting based on merit when the effects of voting are properly accounted for.
I think “judging on quality” is not quite the right standard. Especially for criticisms of EA from people outside EA.
I think people in EA will generally be able to hear and benefit from criticisms expressed by someone who is in EA and knows how to frame them in ways that gel with the general worldview. On the other hand, I think it’s reasonable on priors to expect there to exist external critics who fail to perceive some important things that EA gets right, but who nonetheless manage to perceive some important things that EA is missing or getting wrong.
If everyone on the EA forum judges “is this high quality?”, it’s natural for them to assess that on the dimensions that they have a good grasp of—so they’ll see the critic making mistakes and be inclined to dismiss it. The points it might be important for them to hear from the critics will be less obvious, since they’re a bit more alien to the EA ontology. But this is liable to contribute to an echo chamber. And just as at a personal level I think the most valuable feedback you can get is often of the form “hey you seem to not be tracking dimension X at all”, and it can initially seem annoying or missing the point but then turns out to be super helpful in retrospect, so I think at the group level that EA could really do with being able to sit with external criticism—feel into where the critic is coming from, and where they might be tracking something important, even if they’re making other big mistakes. So I’d rather judge on something closer to “is this saying potentially important things that haven’t been hashed to death?”.
(Note that minutes after posting my comment, I replaced “quality” with “merit”, because it better expresses what I was trying to communicate. However, I don’t think this makes a substantial difference to the point you are raising.)
I think that you managed to articulate clearly what I think is the strongest reason for a certain attitude I see among some EAs, which involves applying different standards to external criticism than to stuff written by members of our community.
Empirically, however, what you say doesn’t ring true to me. My impression is that EA has made progress over time primarily by a process of collective discussion with other EAs, as well as “EA-adjacent” folk like rationalists and forecasters, rather than external critics in the reference class Wenar instantiates. In my opinion, the net effect of such external criticism has, in fact, probably been negative: it has often created polarization and tribalism within the EA community, of the sort that makes it more difficult for us to make intellectual progress, and has misallocated precious community attention, which has in turn slowed down that progress.
So, I’d summarize my position as follows: “Yes, it may be reasonable on priors to expect there to exist critics who can see important problems with EA but who may not be able to articulate that criticism in a way that resonates with us. But our posterior, when we also factor in the evidence supplied by the social and intellectual history of EA, is that there is not much to be gained from engaging with that criticism (criticism that doesn’t seem valuable on its merits), and there is in fact a risk of harm in the form of wasting time and energy on unproductive and acrimonious discussion.”
OK, I don’t feel a particular desire to fight you on the posterior.
But I do feel a desire to fight you on this particular case. I re-read the letter, and I think there’s actually a bunch of great stuff in there, and I think a bunch of people would benefit from reading and thinking about it. I’ve made an annotated version here, where I include my comments about various parts of what seems valuable or misguided.
And then I feel bad about whatever policy people are following that is leading this to attract so many downvotes.
I’m really surprised you’re so positive towards his ‘share of the total’ assumptions (he seems completely unaware of Parfit’s refutation, and is pushing the first mistake in a very naive way, not anything like the “for purposes of co-ordination” steelman that you seem to have in mind). And I’m especially baffled that you had a positive view of his nearest test. This was at the heart of my critique of his WIRED article:
Emphasizing minor, outweighed costs of good things (e.g. vaccines) is a classic form that [moral misdirection] can take… People are very prone to status-quo bias, and averse to salient harms. If you go out of your way to make harms from action extra-salient, while ignoring (far greater) harms from inaction, this will very predictably lead to worse decisions… Note that his “dearest test” does not involve vividly imagining your dearest ones suffering harm as a result of your inaction; only action. Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.
Can you explain what advantage Wenar’s biased test has over the more universal imaginative exercises recommended by R.M. Hare and others?
[P.S. I agree that the piece as a whole probably shouldn’t have negative karma, but I wouldn’t want it to have high karma either; it doesn’t strike me as worth positively recommending.]
Ok hmm I notice that I’m not especially keen to defend him on the details of any of his views, and my claim is more like “well I found it pretty helpful to read”.
Like: I agree that he doesn’t show awareness of Parfit, but think that he’s pushing a position which (numbers aside) is substantively correct in this particular case, and I hadn’t noticed that.
On the nearest test: I’ve not considered this in contrast to other imaginative exercises. I do think you should do a version without an action/inaction asymmetry. But I liked something about the grounding nature of the exercise, and I thought it was well chosen to prompt EAs to do that kind of grounding in connection with important decisions, since I think culturally there can be a risk of getting caught up in abstractions, in ways that may mean we fail to track things we know at some level.
Ok, I guess it’s worth thinking about different audiences here. Something that’s largely tendentious nonsense but includes something of a fresh (for you) perspective could be overall epistemically beneficial for you (since you don’t risk getting sucked in by the nonsense, and might have new thoughts inspired by the ‘freshness’), while being extremely damaging to a general audience who take it at face value (won’t think of the relevant ‘steelman’ improvements), and have no exposure to, or understanding of, the “other side”.
I saw a bunch of prominent academic philosophers sharing the WIRED article with a strong vibe of “This shows how we were right to dismiss EA all along!” I can only imagine what a warped impression the typical magazine reader would have gotten from it. The anti-GiveWell stuff, especially, struck me as incredibly reckless and irresponsible for an academic to write for a general audience (for the reasons I set out in my critique). So, at least with regard to the WIRED article, I’d encourage you to resist any inference from “well I found it pretty helpful” to “it wasn’t awful.” Smart people can have helpful thoughts sparked by awful, badly-reasoned texts!
Yeah, I agree that audience matters. I would feel bad about these articles being one of the few exposures someone had to EA. (Which means I’d probably end up feeling quite bad about the WIRED article; although possibly I’d end up thinking it was productive in advancing the conversation by giving voice to concerns that many people already felt, even if those concerns ended up substantively incorrect.)
But this letter is targeted at young people in EA. By assumption, they’re not going to be ignorant of the basics. And besides any insights I might have got, I think there’s something healthy and virtuous about people being able to try on the perspective of “here’s how EA seems maybe flawed”—like even if the precise criticisms aren’t quite right, it could help open people to noticing subtle but related flaws. And I think the emotional register of the piece is kind of good for that purpose?
To be clear: I’m far from an unmitigated fan of the letter. I disagree with the conclusions, but even keeping those fixed there are a ton of changes that would make me happier with it overall. I wouldn’t want to be sending people the message “hey this is right, you need to read it”. But I do feel good about sending the message “hey this has some interesting perspectives, and this also covers reasons why some smart caring people get off the train; if you’re trying to deepen your understanding of this EA thing, it’s worth a look (and also a look at rebuttals)”. Like I think it’s valuable to have something in the niche of “canonical external critique”, and maybe this isn’t in the top slot for that (I remember feeling good about Michael Nielsen’s notes), but I think it’s up there.
Yeah, I don’t particularly mind this letter (though I see a lot more value in the critiques from Nielsen, NunoSempere, and Benjamin Ross Hoffman). I’m largely reacting to your positive annotated comments about the WIRED piece.
That said, I really don’t think Wenar is (even close to) “substantively correct” on his “share of the total” argument. The context is debating how much good EA-inspired donations have done. He seems to think the answer should be discounted by all the other (non-EA?) people involved in the causal chain, or that maybe only the final step should count (!?). That’s silly. The relevant question is counterfactual. When co-ordinating with others, you might want to assess a collective counterfactual rather than an individual counterfactual, to avoid double-counting (I take it that something along these lines is your intended steelman?); but that seems pretty distant from Wenar’s confused reasoning about the impact of philanthropic donations.
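To make the contrast concrete, here is a minimal sketch with hypothetical numbers (loosely in the spirit of Parfit’s rescue case, not anything Wenar actually computes), showing how ‘share of the total’ credit can come apart from the counterfactual question:

```python
# Toy, made-up numbers in the spirit of Parfit's rescue case (not Wenar's figures).
donors = 5
lives_saved_jointly = 100      # what the jointly funded programme achieves in total
lives_saved_without_me = 100   # assume the other four donors would have covered the gap
lives_i_could_save_elsewhere = 10

# "Share of the total": split the joint total evenly among participants.
share_of_total = lives_saved_jointly / donors                              # 20.0

# Individual counterfactual: what changes because I, in particular, took part?
individual_counterfactual = lives_saved_jointly - lives_saved_without_me   # 0

# Collective counterfactual: what the group achieves relative to none of them
# acting; useful for coordination, and avoids double-counting if every member
# were to claim the full 100 for themselves.
collective_counterfactual = lives_saved_jointly                            # 100

print(share_of_total, individual_counterfactual, collective_counterfactual)
# Share-of-total credit (20) looks bigger than what I could do elsewhere (10),
# yet my individual counterfactual impact here is 0, so crediting by shares
# recommends the worse option. That is the gap between share-of-the-total
# accounting and the counterfactual question discussed above.
```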
I agree that Wenar’s reasoning on this is confused, and that he doesn’t have a clear idea of how it’s supposed to work.
I do think that he’s in some reasonable way gesturing at the core issue, even if he doesn’t say very sensible things about how to address that issue.
And yeah, that’s the rough shape of the steelman position I have in mind. I wrote a little about my takes here; sorry I’ve not got anything more comprehensive: https://forum.effectivealtruism.org/posts/rWoT7mABXTfkCdHvr/jp-s-shortform?commentId=ArPTtZQbngqJ6KSMo
Thanks for the link. (I’d much rather people read that than Wenar’s confused thoughts.)
Here’s the bit I take to represent the “core issue”:
If everyone thinks in terms of something like “approximate shares of moral credit”, then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they’d all done something different.
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that he’s (i) conceptually confused about moral credit in a way that is deeply unreasonable, (ii) thinking about how to discredit EA, not how to optimize coordination, and (iii) simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I’m right about (i)-(iii), then I don’t think it’s accurate to characterize him as “in some reasonable way gesturing at the core issue.”
I guess I think it’s likely some middle ground? I don’t think he has a clear conceptual understanding of moral credit, but I do think he’s tuning in to ways in which EA claims may be exaggerating the impact people can have. I find it quite easy to believe that’s motivated by some desire to make EA look bad—but so what? If people who want to make EA look bad make for good researchers hunting for (potentially-substantive) issues, so much the better.
Thanks for the useful exchange.
It may be useful to consider whether you think your comment would pass a reversal test: if the roles were reversed and it was an EA criticizing another movement, but the criticism was otherwise comparable (e.g. in tone and content), would you also have expressed a broadly positive opinion about it? If yes, that would suggest we are disagreeing about the merits of the letter. If no, it seems it’s primarily a disagreement about the standards we should adopt when evaluating external criticism.
Yes, I’d be broadly positive about it. I might say something like “I know you’re trying to break through so people can hear you, but I think you’re being a little unnecessarily antagonistic. Also I think you’re making a number of mistakes about their movement (or about what’s actually good). I sort of wish you’d been careful to avoid more of those. But despite all that I think this contains a number of pretty insightful takes, and you will be making a gift to them in offering it if they can get past the tone and the errors to appreciate it. I hope they do.”
Update: I think I’d actually be less positive on it than this if I thought their antagonism might splash back on other people.
I took that not to be a relevant part of the hypothetical, but actually I’m not so sure. I think that for people in the community, policing their mistakes creates a public good (for the community), so I’m not inclined to let error-filled things slide for the sake of the positives. For people outside the community, I’m not so invested in building up the social fabric, so it doesn’t seem worth trying to punish the errors; the right move seems to be something more like straightforwardly looking for the good bits.
This seems strange to me. Jason is literally right that—especially at near-zero karma—voting is, mechanically, a mechanism for deciding how much the post is promoted or whether it is hidden entirely. One could argue that there are second-order social effects of taking that into account, but that’s a much more speculative argument.
I have no strong view on how much this post should be favoured by the sorting algorithm, but I am strongly against it being hidden. If nothing else, people might want to refer to it later whether they like or dislike it, as an extended piece of criticism from a prominent source.