Definitely not all of them, but most EAs are extremely rich guys who aren’t donating any of their money.
AnonymousTurtle
GiveWell and Open Philanthropy just made a $1.5M grant to Malengo!
Congratulations to @Johannes Haushofer and the whole team; this seems like such a promising intervention from a wide variety of perspectives
No, but in expectation it wasn’t very far from the stock market valuation. I think it’s very possible that it was positive EV even if it didn’t work out
I think the only thing Imma might be “median” in is weekly work hours, which I don’t think is what the poster meant. Most people couldn’t do these things
I agree with some of this comment and disagree with other parts:
“people who initially set up Givewell, did the research and convinced Dustin to donate his money did a truly amazing job”
AFAIK Dustin would have donated a roughly similar amount anyway, at least at Gates-level cost-effectiveness, so I don’t think EA gets any credit for that (unless you include Dustin in EA, which you don’t seem to do)
“The EA leadership has fucked up a bunch of stuff. Many ‘elite EAs’ were not part of the parts of EA that went well.” I agree, but I think we’re probably thinking of different parts of EA
“‘Think for yourself about how to make the world better and then do it (assuming its not insane)’ is probably both going to be better for you and better for the world” I agree with this, but I would be careful about where your thoughts are coming from
I agree with the examples, but for the record I think it’s very misleading to claim Imma is a “mediocre EA”.
If I understand correctly, she moved to a different country so she could donate more, which enables her to donate a lot with her “normal” tech job (much more than the median EA). Before that, she helped kickstart the now booming Dutch EA community, and helped with “Doing Good Better” (she’s in the credits)
My understanding is that she’s not giving millions every year or founding charities, but she still did much more than a “median EA” would be able to
Like with Wytham Abbey, I’m really surprised by people in this thread confusing investments with donations.
If SBF had invested some billions in Twitter, the money wouldn’t be burned, see e.g. what happened with Anthropic.
From his (and most people’s) perspective, SBF was running FTX with ~1% the employees of comparable platforms, so it seemed plausible he could buy Twitter, cut 90% of the workforce like Musk did, and make money while at the same time steering it to be more scout-mindset and truth-seeking oriented.
r/philosophy response: https://old.reddit.com/r/philosophy/comments/1bw3ok2/the_deaths_of_effective_altruism_wired_march_2024/
to what extent was the ongoing death of effective altruism, as this article puts it, caused by the various problems it inherited from utilitarianism? The inability to effectively quantify human wellbeing, for instance, or the ways in which Singer’s drowning child analogy (a foundation of EA) seems to discount the possibility that some people (say, children that we have brought into the world) might have special moral claims on us that other people do not.
Don’t think it’s really because of its philosophical consequences. EA as an organization was super corrupt and suspicious. That’s why it’s falling apart. Like it quickly went from “buy the best mosquito net” to “make sure AI doesn’t wipe out humanity”. Oh and also let’s buy a castle as EA headquarters. Its motivations quickly shifted from charity work to proselytization.
Most of its issues seem to fundamentally lie in the fact that it’s an organization run by wealthy, privileged people that use “rationality” to justify their actions.
https://old.reddit.com/r/slatestarcodex/comments/1brg5t3/the_deaths_of_effective_altruism/kx91f5k/ Scott Alexander response to the Leif Wenar article
AnonymousTurtle’s Quick takes
Sam Harris and William MacAskill on SBF & EA
The Shrimp You Can Save
Actually, all EA orgs should just rename to “The Shrimps You Can Save”
Their criticism of EA is precisely that they think EAs can’t see people far away as “real, flesh-and-blood human”, just numbers in a spreadsheet.
Yes, I’m accusing them of precisely the thing they are accusing EA of.
To me it’s clearly not a coincidence that all three of them are not recommending to stop using numbers or spreadsheets, but they are proposing to donate to “real” humans that you have a relationship with.
following it with “and that’s why donating money to people far away is problematic!” makes no sense
I think it makes complete sense if they don’t think these people are real people, or their responsibility.
Tucker dismisses charity to “people he’s never met and never will meet”, Schiller is more reasonable but says that it’s really important to “have a relationship” with beneficiaries, Wenar brings as a positive example the surfer who donates to his friends.
If any of them endorsed donating to people in a low income country who you don’t have a relationship with, I would be wrong.
Thank you for clarifying!
I will note that the original comment still has positive upvotes
I (and others) have strongly upvoted it because (especially post-FTX[1]) it’s important to encourage people to share concerns about unethical behavior from influential people in the ecosystem; it’s not an indication of agreement.
Agree-votes do convey a lot of information, and I’m surprised that nobody else is defending this position in the comments, given 7 people agree with you.
I found one of the examples here very unpersuasive: I read this report years ago, and I distinctly remember it was very clear that it was meant to “get a quick sense of things”, only had a few hours of research behind it, and wasn’t meant to pass as rigorous research. It was the first thing I read about animal welfare and it was enlightening; I’m grateful that they published it. Here is the first paragraph:
After spending considerable time on creating the best system we could for evaluating animal welfare, we applied this system to 15 different animals/breeds. This included 6 types of wild animal and 7 types of farm animal environments, as well as 2 human conditions for baseline comparisons. This was far from a complete list, but it gave us enough information to get a sense of the different conditions. Each report was limited to 2-5 hours with pre-set evaluation criteria (as seen in this post), a 1-page summary, and a section of rough notes (generally in the 5-10 page range). Each summary report was read by 8 raters (3 from the internal CE research team, 5 external to the CE team). The average weightings and ranges in the spreadsheet below are generated by averaging the assessments of these raters.
(I am not affiliated with CE, but it would be important for me to know if their research was bad)
[1] and, less so, post-OCB, post-Leverage, post-CFAR, …
In general, I think it’s important to separate EA as in the idea from EA as in “a specific group of people”. You might hate billionaires, MacAskill and GiveWell, but the equal consideration of similar interests can still be an important concept.
Just because you never met them, it doesn’t mean that people like GiveDirectly recipients are not “real, flesh-and-blood human”, who experience joys and sorrows as much as you do, and have a family or friends just as much as you have.
Tucker Carlson, when writing a similar critique of effective altruism, even used “people” in scare quotes to indicate how sub-human he considers charity beneficiaries to be, just because they happened to be born in a different country and never met a rich person. Amy Schiller says that people you don’t have a relationship with are just “abstract objects”.
I see EA as going against that, acting on the belief that we are all real people, who don’t matter less if we happen to be born in a low income country with no beaches.
As for your questions:
Do folks agree EA’s shortfalls form a pattern & are not one off incidents? (And, if so, what are those shortfalls?)
Yeah, folks agree that EA has many shortfalls, to the point that people write about Criticism of Criticism of Criticism. Some people say that EA focuses too much on the data and ignores non-RCT sources of information and more ambitious change; other people say that it focuses too much on speculative interventions that are not backed by data, based on arbitrary “priors”. Some say that it doesn’t give enough to non-human animals; some say it shouldn’t give anything to non-human animals.
Also, in general anything can call itself “EA”, and some projects that have been associated with “EA” are going to be bad just on base rates.
2. How can we (as individuals or collectively) update or reform / what ought we do differently in light of them?
I’d guess it depends on your goals. I think donating more money is increasingly valuable if you think the existing donors are doing a bad job at it. (Especially if you have the income of a Stanford Professor)
CE has a fairly strong reputation of being hostile / non-collaborative
Could you elaborate on “being hostile”? Do they have a reputation for causing harm, or is it just about not listening to feedback?
What do you donate to?
What is your take on GiveDirectly?
Do you think Mariam is not a “real, flesh-and-blood human”, since you never met her?
Do you think that spending money surfing and travelling the world while millions are starving could be considered by some a suboptimal use of capital?
METR (Model Evaluation & Threat Research) might also be worth mentioning. I wonder if there’s a list of capability-evaluation projects somewhere
I think mainstream HR comes primarily from the private sector and is primarily about protecting the employer, often against the employee. They often cast themselves in a role of being there to help you, but a common piece of folk wisdom is “HR is not your friend”. I think frankly that a lot of mainstream HR culture is at worst dishonest and manipulative, and I’d be really sad to see us uncritically importing more of that.
I see a lot of this online, but it doesn’t match my personal experience. The people working in HR that I’ve been in contact with seem like generally kind people, aware of tradeoffs, who genuinely care about the wellbeing of employees.
I worry that the online reputation of HR departments is shaped by a minority of terrible experiences, and we overgeneralize that to think that HR cannot or will not help, while in my experience they are often really eager to try to help (in part because they don’t want you and others to quit, but also because they are nice people).
Maybe it’s also related to minimum-wage non-skilled jobs vs higher paying jobs, where employment tends to be less adversarial and less exploitative.
https://forum.effectivealtruism.org/posts/nb6tQ5MRRpXydJQFq/ea-survey-2020-series-donation-data#Donation_and_income_for_recent_years, and personal conversations which make me suspect the assumption of non-respondents donating as much as respondents is excessively generous.
Not donating any of their money is definitely an exaggeration, but it’s not more than the median rich person donates: https://www.philanthropyroundtable.org/almanac/statistics-on-u-s-generosity/