Same, Oscar! I hope to ask him about this
I don’t agree that you need separate numbers for lives lost and lives saved, but I had always implicitly assumed that ‘lives saved’ was a net calculation.
Interesting! I think the question of whether 1 QALY saved (in expectation) is canceled out by the loss of 1 QALY (in expectation) is complicated. I tend to think there’s an asymmetry between how good well-being is & how bad suffering is, though my views on this have oscillated a lot over the years. I’d like GiveWell to keep the tallies separate because I’d prefer to make the moral judgement based on my current take on this asymmetry, rather than have them default to saying it’s 1:1.
Appreciate the question, Larks, & wish I’d noted this initially!
(Aside/caveat: I’m a bit pressed for time so not responding as fully as I’d like but I’ll do my best to make time to expand in the coming days.)
“great”
he does a great job critiquing EA as a whole & showing the shortfalls are not isolated incidents.
I think a lot of criticisms of EA as applied highlight specific incidents: a miscalculation, or a person who did something objectionable. But I think Leif made an effort to show these shortfalls to be a pattern, as opposed to one-off incidents. And, as a result, I’m currently trying to figure out whether there is indeed a pattern of shortcomings, what those patterns are, and how to update or reform or what to do in light of them.
I’m tentatively leaning toward thinking there are some patterns, thanks to Leif and others, but I feel pretty clueless about the last bit (updates/reforms/actions).
“thoughtful”
Leif Wenar thoughtfully critiqued EA in “Poverty is No Pond” (2011)
Technically, “thoughtfully” was in reference to Poverty is No Pond. :) The above re pattern of shortcomings was the main reason I linked the piece. And, more importantly, I want to brainstorm with y’all (& Leif) how to update or reform or what to do in light of any patterns of shortcomings.
I do think the article’s style & snark undercuts Leif’s underlying thoughtfulness. When I chatted with him (just once for an hour) a few weeks ago, he showed the utmost kindness, earnestness, & thoughtfulness, with no snark (though I was aware that this post would be tonally different).
Unrequested rhetorical analysis: All the snark does make me feel his primary rhetorical purpose was to discourage talented, thoughtful, well-intentioned young people from engaging with EA, as opposed to changing the minds of those already engaging with EA (& likely frequenting this forum). idk, maybe I’ll come to endorse this aim in the future, but in the past I definitely haven’t, as evidenced by the hundreds of hours I’ve spent community building.
So, to clarify, discouraging awesome people from engaging with EA was not my rhetorical purpose in this linkpost. Rather, it was to spark a discussion & brainstorm with y’all about:
Do folks agree EA’s shortfalls form a pattern & are not one-off incidents? (And, if so, what are those shortfalls?)
How can we (as individuals or collectively) update or reform / what ought we do differently in light of them?
I’m super curious why this was downvoted. If you’re open to sharing your thoughts, I’d love to hear them here, or DM me.
I’m just a student & a few weeks ago I emailed him asking to chat, which he kindly agreed to do. (It was basically a cold email after chatting with a friend about Poverty is No Pond.) We had a good conversation, he came across as very kind & genuine, & we agreed to talk again next week (after spring break, & after this piece was published).
“As much as I’d also be keen for dialogue and improvement, the level of vitriol combined with flat-out mistakes/misrepresentations in the article really doesn’t make me see Leif as a good-faith interlocutor here.”
This is really understandable, though my impression from talking with him is that he is actually thinking about all this in good faith. I also found the piece unsatisfactory in that it didn’t offer solutions, which is what I meant to allude to in saying “But, really, I’m interested in the follow-up piece...”
Thanks for sharing your thoughts, btw :)
My understanding is he’s not at all an advocate for epistemic nihilism (nor for just basing decisions on anecdotes like those he shared). (Though the post leaves me a little epistemically depressed.) I think he (like me) thinks we can do better, & in the post he’s arguing that EA is not managing to do better. And, my impression is he is genuinely trying to figure out how we can do better.
Share your questions for Leif here
Epistemic status: Off-hand, not carefully thought-through comment
There aren’t a lot of hours of research going into “EA’s recommended climate charities,” to the extent that’s a thing. So for climate stuff there are a lot of specific orgs, but on a meta-level large donors should probably chat with:
Jonathan Foley at Project Drawdown (& heed Drawdown’s work, but not their rankings, which are widely acknowledged to be a marketing gimmick)
ClimateWorks
Prime Coalition ($25M minimum as of 2020; an investment fund, but optimizing for impact, iirc)
Breakthrough Energy Ventures (similar deal to Prime)
Probably also check out what the Gates Foundation is funding in this area.
And, be mindful that the space is changing so fast that recs from a year ago may no longer apply (which is a problem when so few people are looking into high-impact giving opportunities)
Personally, I think the Rainforest Trust seems good.
[Question] What are your Qs for Leif Wenar?
If you haven’t read it already, you might find “Poverty is No Pond” (2011) interesting. He discusses his critiques of EA’s approach to global development & of GiveWell in more detail.
[Linkpost] Leif Wenar’s The Deaths of Effective Altruism
Leif Wenar
[Question] How independent is the research coming out of OpenAI’s preparedness team?
[Linkpost] Beware the Squirrel by Verity Harding
[Question] Is EA’s capital invested as well as top endowments?
I think you make some excellent points. Thanks for writing this.
Thank you so much! I’ve been busy the last few weeks & totally missed this post. Really appreciate you sharing it
I’m afraid I’m not being clear; does the below at all address your uncertainty?
I just added this edit: Should EA dedicate some resources to reducing our uncertainty on this question? (“This question” referring to whether, & by how much, x-risk would increase if the Western world were run by Viktor Orbáns.)
Hey Noah, Thanks for your comment. I guess I was asking less about the impact democratic leaders have on climate policy and more about how a warmer, less democratic world could increase the likelihood of an x-risk. Do you have any thoughts on how the things mentioned in your comment may impact x-risks?
I don’t have a link on hand but Matt Levine has some great articles explaining the collapse of FTX. Perhaps check out some of the stuff he wrote in November 2022 when the news was just breaking for background & then more recent stuff for a fuller picture.
I also initially misinterpreted the title so maybe consider renaming it something like “Can someone explain the FTX fraud?”