I'm really surprised you're so positive towards his "share of the total" assumptions (like, he seems completely unaware of Parfit's refutation, and is pushing the 1st mistake in a very naive way, not anything like the "for purposes of co-ordination" steelman that you seem to have in mind). And I'm especially baffled that you had a positive view of his nearest test. This was at the heart of my critique of his WIRED article:
Emphasizing minor, outweighed costs of good things (e.g. vaccines) is a classic form that [moral misdirection] can take... People are very prone to status-quo bias, and averse to salient harms. If you go out of your way to make harms from action extra-salient, while ignoring (far greater) harms from inaction, this will very predictably lead to worse decisions... Note that his "dearest test" does not involve vividly imagining your dearest ones suffering harm as a result of your inaction; only action. Wenar is here promoting a general approach to practical reasoning that is systematically biased (and predictably harmful as a result): a plain force for ill in the world.
Can you explain what advantage Wenar's biased test has over the more universal imaginative exercises recommended by R.M. Hare and others?
[P.S. I agree that the piece as a whole probably shouldn't have negative karma, but I wouldn't want it to have high karma either; it doesn't strike me as worth positively recommending.]
Ok hmm I notice that I'm not especially keen to defend him on the details of any of his views, and my claim is more like "well I found it pretty helpful to read".
Like: I agree that he doesn't show awareness of Parfit, but think that he's pushing a position which (numbers aside) is substantively correct in this particular case, and I hadn't noticed that.
On the nearest test: I've not considered this in contrast to other imaginative exercises. I do think you should do a version without an action/inaction asymmetry. But I liked something about the grounding nature of the exercise, and I thought it was well chosen to prompt EAs to try that kind of grounding in connection with important decisions, when I think culturally there can be a risk of getting caught up in abstractions, in ways that may mean we fail to track things we know at some level.
Ok, I guess it's worth thinking about different audiences here. Something that's largely tendentious nonsense but includes something of a fresh (for you) perspective could be overall epistemically beneficial for you (since you don't risk getting sucked in by the nonsense, and might have new thoughts inspired by the "freshness"), while being extremely damaging to a general audience who take it at face value (won't think of the relevant "steelman" improvements), and have no exposure to, or understanding of, the "other side".
I saw a bunch of prominent academic philosophers sharing the WIRED article with a strong vibe of "This shows how we were right to dismiss EA all along!" I can only imagine what a warped impression the typical magazine reader would have gotten from it. The anti-GiveWell stuff, especially, struck me as incredibly reckless and irresponsible for an academic to write for a general audience (for the reasons I set out in my critique). So, at least with regard to the WIRED article, I'd encourage you to resist any inference from "well I found it pretty helpful" to "it wasn't awful." Smart people can have helpful thoughts sparked by awful, badly-reasoned texts!
Yeah, I agree that audience matters. I would feel bad about these articles being one of the few exposures someone had to EA. (Which means I'd probably end up feeling quite bad about the WIRED article; although possibly I'd end up thinking it was productive in advancing the conversation by giving voice to concerns that many people already felt, even if those concerns ended up substantively incorrect.)
But this letter is targeted at young people in EA. By assumption, they're not going to be ignorant of the basics. And besides any insights I might have got, I think there's something healthy and virtuous about people being able to try on the perspective of "here's how EA seems maybe flawed"; like even if the precise criticisms aren't quite right, it could help open people to noticing subtle but related flaws. And I think the emotional register of the piece is kind of good for that purpose?
To be clear: I'm far from an unmitigated fan of the letter. I disagree with the conclusions, but even keeping those fixed there are a ton of changes that would make me happier with it overall. I wouldn't want to be sending people the message "hey this is right, you need to read it". But I do feel good about sending the message "hey this has some interesting perspectives, and it also covers reasons why some smart, caring people get off the train; if you're trying to deepen your understanding of this EA thing, it's worth a look (and also a look at rebuttals)". Like I think it's valuable to have something in the niche of "canonical external critique", and maybe this isn't in the top slot for that (I remember feeling good about Michael Nielsen's notes), but I think it's up there.
Yeah, I don't particularly mind this letter (though I see a lot more value in the critiques from Nielsen, NunoSempere, and Benjamin Ross Hoffman). I'm largely reacting to your positive annotated comments about the WIRED piece.
That said, I really don't think Wenar is (even close to) "substantively correct" on his "share of the total" argument. The context is debating how much good EA-inspired donations have done. He seems to think the answer should be discounted by all the other (non-EA?) people involved in the causal chain, or that maybe only the final step should count (!?). That's silly. The relevant question is counterfactual. When co-ordinating with others, you might want to assess a collective counterfactual rather than an individual counterfactual, to avoid double-counting (I take it that something along these lines is your intended steelman?); but that seems pretty distant from Wenar's confused reasoning about the impact of philanthropic donations.
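For concreteness, here's a toy sketch of the contrast (the numbers are entirely made up for illustration; they're not Wenar's, and not GiveWell's):

```python
# Toy example: five funders jointly enable a program that saves 100 lives,
# and (by stipulation) the program would not have run if any one of them
# had dropped out. All figures are illustrative.

TOTAL_LIVES_SAVED = 100
NUM_CONTRIBUTORS = 5

# "Share of the total": split the outcome evenly across everyone in the
# causal chain, regardless of what would have happened without you.
share_of_total_credit = TOTAL_LIVES_SAVED / NUM_CONTRIBUTORS   # 20.0

# Individual counterfactual: compare the world with your contribution to
# the world without it. Since the program collapses without any one funder,
# each funder's individual counterfactual impact is the full 100 lives.
individual_counterfactual = TOTAL_LIVES_SAVED - 0               # 100

# Naively summing individual counterfactuals double-counts: it credits the
# five funders with 500 lives saved by a 100-life program.
naive_sum = individual_counterfactual * NUM_CONTRIBUTORS        # 500

# A collective counterfactual avoids that: the group as a whole changed the
# outcome by 100 lives, and any division of "credit" within the group is a
# coordination device, not a measure of what each donation actually changed.
collective_counterfactual = TOTAL_LIVES_SAVED                   # 100

print(share_of_total_credit, individual_counterfactual,
      naive_sum, collective_counterfactual)
```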
I agree that Wenar's reasoning on this is confused, and that he doesn't have a clear idea of how it's supposed to work.
I do think that he's in some reasonable way gesturing at the core issue, even if he doesn't say very sensible things about how to address that issue.
And yeah, that's the rough shape of the steelman position I have in mind. I wrote a little about my takes here; sorry I've not got anything more comprehensive: https://forum.effectivealtruism.org/posts/rWoT7mABXTfkCdHvr/jp-s-shortform?commentId=ArPTtZQbngqJ6KSMo
Thanks for the link. (I'd much rather people read that than Wenar's confused thoughts.)
Here's the bit I take to represent the "core issue":
If everyone thinks in terms of something like "approximate shares of moral credit", then this can help in coordinating to avoid situations where a lot of people work on a project because it seems worth it on marginal impact, but it would have been better if they'd all done something different.
Can you point to textual evidence that Wenar is actually gesturing at anything remotely in this vicinity? The alternative interpretation (which I think is better supported by the actual text) is that he's (i) conceptually confused about moral credit in a way that is deeply unreasonable, (ii) thinking about how to discredit EA, not how to optimize coordination, and (iii) simply happened to say something that vaguely reminds you of your own, much more reasonable, take.
If I'm right about (i)-(iii), then I don't think it's accurate to characterize him as "in some reasonable way gesturing at the core issue."
I guess I think it's likely some middle ground? I don't think he has a clear conceptual understanding of moral credit, but I do think he's tuning in to ways in which EA claims may be exaggerating the impact people can have. I find it quite easy to believe that's motivated by some desire to make EA look bad. But so what? If people who want to make EA look bad make for good researchers hunting for (potentially substantive) issues, so much the better.