Depending on your goals, I feel like a podcast might not be the most appropriate format for this deep dive on a sensitive topic. One reason is that I would expect many of the most informed people might prefer to only share information anonymously.
AnonymousTurtle
It’s possible to work at OpenAI and care about safety without being friends with CEA staff though.
It doesn’t seem that anyone at OpenAI besides the EA community is too worried, which to me is a positive update.
“EA folks most often single out to express frustration & disappointment about”
I’ve never seen anyone express frustration or disappointment about Dustin, except for Habryka. However, Habryka seems to be frustrated with most people who fund anything / do anything that’s not associated with Lightcone and its affiliates, so I don’t know if he should count as expressing frustration at Dustin in particular.

Are you including things like criticizing OP for departing grantees too quickly and for not departing grantees quickly enough, or do you have something else in mind?
I view Dustin and OP as quite separate, especially before Holden’s departure, so that might also explain our different experience.
She has a 2-year-old EA forum account https://forum.effectivealtruism.org/users/gretchen-krueger-1 , has written reports in 2020 and in 2021 where most (all?) co-authors are in the EA community, is mentioned in this post, and is Facebook friends with at least one CEA employee
They all seem related to the EA community, and for many it’s not clear if they left or were fired.
Besides Ilya Sutskever, is there any person not related to the EA community who quit or was fired from OpenAI for safety concerns?
https://forum.effectivealtruism.org/posts/nb6tQ5MRRpXydJQFq/ea-survey-2020-series-donation-data#Donation_and_income_for_recent_years, and personal conversations which make me suspect the assumption of non-respondents donating as much as respondents is excessively generous.
Not donating any of their money is definitely an exaggeration, but they don’t donate more than the median rich person does https://www.philanthropyroundtable.org/almanac/statistics-on-u-s-generosity/
Definitely not all of them, but most EAs are extremely rich guys who aren’t donating any of their money.
GiveWell and Open Philanthropy just made a $1.5M grant to Malengo!
Congratulations to @Johannes Haushofer and the whole team, this seems such a promising intervention from a wide variety of views
No, but in expectation it wasn’t very far from the stock market valuation. It’s very possible that it was positive EV even if it didn’t work out
I think the only thing Imma might be “median” in is weekly work hours, which I don’t think is what the poster meant. Most people couldn’t do these things
I agree with some of this comment and disagree with other parts:
“people who initially set up Givewell, did the research and convinced Dustin to donate his money did a truly amazing job”
AFAIK Dustin would have donated a roughly similar amount anyway, at least at Gates levels of cost-effectiveness, so I don’t think EA gets any credit for that (unless you include Dustin in EA, which you don’t seem to do)
“The EA leadership has fucked up a bunch of stuff. Many ‘elite EAs’ were not part of the parts of EA that went well.” I agree, but I think we’re probably thinking of different parts of EA
“‘Think for yourself about how to make the world better and then do it (assuming its not insane)’ is probably both going to be better for you and better for the world” I agree with this, but I would be careful about where your thoughts are coming from
I agree with the examples, but for the record I think it’s very misleading to claim Imma is a “mediocre EA”.
If I understand correctly, she moved to a different country so she could donate more, which enables her to donate a lot with her “normal” tech job (much more than the median EA). Before that, she helped kickstart the now booming Dutch EA community, and helped with “Doing Good Better” (she’s in the credits)
My understanding is that she’s not giving millions every year or founding charities, but she still did much more than a “median EA” would be able to
Like with Wytham Abbey, I’m really surprised by people in this thread confusing investments with donations.
If SBF had invested some billions in Twitter, the money wouldn’t be burned, see e.g. what happened with Anthropic.
From his (and most people’s) perspective, SBF was running FTX with ~1% the employees of comparable platforms, so it seemed plausible he could buy Twitter, cut 90% of the workforce like Musk did, and make money while at the same time steering it to be more scout-mindset and truth-seeking oriented.
r/philosophy response: https://old.reddit.com/r/philosophy/comments/1bw3ok2/the_deaths_of_effective_altruism_wired_march_2024/
to what extent was the ongoing death of effective altruism, as this article puts it, caused by the various problems it inherited from utilitarianism? The inability to effectively quantify human wellbeing, for instance, or the ways in which Singer’s drowning child analogy (a foundation of EA) seems to discount the possibility that some people (say, children that we have brought into the world) might have special moral claims on us that other people do not.
Don’t think it’s really because of its philosophical consequences. EA as an organization was super corrupt and suspicious. That’s why it’s falling apart. Like it quickly went from “buy the best mosquito net” to “make sure AI doesn’t wipe out humanity”. Oh and also let’s buy a castle as EA headquarters. Its motivations quickly shifted from charity work to proselytization.
Most of its issues seem to fundamentally lie in the fact that it’s an organization run by wealthy, privileged people that use “rationality” to justify their actions.
https://old.reddit.com/r/slatestarcodex/comments/1brg5t3/the_deaths_of_effective_altruism/kx91f5k/ Scott Alexander response to the Leif Wenar article
The Shrimp You Can Save
Actually, all EA orgs should just rename to “The Shrimps You Can Save”
Their criticism of EA is precisely that they think EAs can’t see people far away as “real, flesh-and-blood human”, just numbers in a spreadsheet.
Yes, I’m accusing them of precisely the thing they are accusing EA of.
To me it’s clearly not a coincidence that none of the three recommends giving up numbers or spreadsheets; instead, all of them propose donating to “real” humans that you have a relationship with.
following it with “and that’s why donating money to people far away is problematic!” makes no sense
I think it makes complete sense if they don’t think these people are real people, or their responsibility.
Tucker dismisses charity to “people he’s never met and never will meet”, Schiller is more reasonable but says that it’s really important to “have a relationship” with beneficiaries, Wenar brings as a positive example the surfer who donates to his friends.
If any of them endorsed donating to people in a low-income country whom you don’t have a relationship with, I would be wrong.
Another important caveat is that the criticisms you mention are not common from people evaluating the effective altruism framework from the outside when allocating their donations or orienting their careers.
The criticisms you mention come from people who have spent a lot of time in the community, and usually (but not exclusively) from those of us who have been rejected from job applications, denied funding, or had bad social experiences/cultural fit with the social community.
This doesn’t necessarily make them less valid, but seems to be a meaningfully different topic from what this post is about. Someone altruistically deciding how much money to give to which charity is unlikely to be worried about whether they will be seduced into believing that they would be cherished members of a community.
People evaluating effective altruism “from the outside” instead mention things like the paternalism and unintended consequences, that it doesn’t care about biodiversity, that quantification is perilous, that socialism is better, or that capitalism is better.
Note that I do agree with many of your criticisms of the community[1], but I believe it’s important to remember that the vast majority of people evaluating effective altruism are not in the EA social community and don’t care much about it, and we should probably flag our potential bias when criticizing an organization after being denied funding or rejected from it (while still expressing that useful criticism).
I would also add Ben Kuhn’s “pretending to try” critique from 11 years ago, which I assume shares some points with your unpublished “My experience with a Potemkin Effective Altruism group”