Can I ask what your connection to Leif is? Are you in contact with him directly/indirectly in some way?
I did use the term "epistemic nihilism" as a turn of phrase, but I don't think it's entirely unwarranted. I think the acid test that Leif's applying to GiveWell, if applied to literally any other choice, would lead that way. He certainly doesn't provide any grounding for any of the alternatives.
As much as I'd also be keen for dialogue and improvement, the level of vitriol combined with flat-out mistakes/misrepresentations in the article[1] really doesn't make me see Leif as a good-faith interlocutor here.
[1] At least from my point of view; they're caused either by his anger or by wilful misrepresentation.
I'm just a student & a few weeks ago I emailed him asking to chat, which he kindly agreed to do. (It was basically a cold email after chatting with a friend about Poverty is No Pond.) We had a good conversation & he came across as very kind & genuine, & we agreed to talk again next week (after spring break & after this piece was published).
"As much as I'd also be keen for dialogue and improvement, the level of vitriol combined with flat-out mistakes/misrepresentations in the article really doesn't make me see Leif as a good-faith interlocutor here."
This is really understandable, though my impression from talking with him is that he is actually thinking about all this in good faith. I also found the piece unsatisfactory in that it didn't offer solutions, which is what I meant to allude to in saying "But, really, I'm interested in the follow-up piece..."
Thanks for sharing your thoughts, btw :)
I think it's really great you reached out to him, and I hope things are going well at Stanford and that you're enjoying spring break :) And I think if you're interested in pursuing his ideas, go and talk to him and don't necessarily feel like you have to "represent EA" in any meaningful way.
I think Poverty is No Pond is a thoughtful piece of criticism, even if I disagree with some of the arguments/conclusions in it. But The Deaths of Effective Altruism is a much worse piece imo, and I don't know how to square its incredible hostility with the picture of a genuine and good-faith person you talked about. Some of it seems to come from a place of deep anger, and it makes simple mistakes or asks questions that could have been answered with some easy research or reflection.
I may raise some of these points more specifically in the "Questions for Leif" post, but again, I think you should ask your own questions rather than mine!
Reading through it, the vitriolic parts are mostly directed at MacAskill. The author seems to have an intense dislike for MacAskill specifically. He thinks MacAskill is a fraud/idiot and is angry at him being so popular and influential. Personally, I don't think this hatred is justified, but I have similar feelings about other popular EA figures, so I'm not sure I can judge that much.
I think if you ignore everything directed at MacAskill, it comes off as harsh but not excessively hostile, and while I disagree with plenty of what's in there, it does not come across as bad faith to me.
I cannot really speak to how good or honest Will's public-facing stuff about practical charity evaluation is, and I find WWOTF a bit shallow outside of the really good chapter on population ethics, where Will actually has domain expertise. But the claim that Will is hilariously incompetent as a philosopher is, frankly, garbage. As is the argument for it that Will once defined altruism in a non-standard way. Will regularly publishes in leading academic philosophy journals. He became the UK equivalent of a tenured prof super young at one of the world's best universities. Also, frankly, many years ago I actually discussed technical philosophy with Will once or twice, and, like most Oxford graduate students in philosophy, he knows what he's doing.
I am still somewhat worried that Wenar has genuinely good criticism of GiveWell, but that part of the article was somewhat of a mark against its credibility to me even if all the other bad things it says about Will are true. (Note: I'm not conceding they are true.)