Quick thoughts on Effective Altruism, and a response to Dr. Sarah Taber’s good thread [1] calling EA a cult (which it functionally is).
I have believed in EA thinking for about 8 years, and continue to do so despite the scandals. HOWEVER, I agree fully with this part of “Doing EA Better” [2], which I’ll call “the critique” from now on: “Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights”.
As a leftist who wants to see the end of capitalism, cops, climate change, wealth inequality, and the dooming of entire nations to death and despair to uphold a white supremacist order, I am not particularly attached to EA or its specific GiveWell top charities. I think EA works best when it’s only an attempt to be “less wrong”, and always iterating through a SCIENTIFIC process.
Without having spent much time in the weeds, I think there’s a strong moral case that a GiveWell cause (say, mosquito nets) is superior to St. Jude’s or the Red Cross. Keep in mind that EA is a small minority of all charitable giving, and all charitable mindshare (you’ve never seen a bell ringer for EA). The charities you see in public and are asked to contribute to at checkout are almost objectively less efficient at turning dollars into lives saved or vastly improved than charities that quietly work in the poorest parts of the world. EA is having a moral crisis in a way that the Catholic Church has never meaningfully had to face.
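To make “efficient at turning dollars into lives saved” concrete, here’s a minimal back-of-envelope sketch in Python. The ~$5,000-per-life figure is roughly what GiveWell has published for its top malaria charities; the $100,000 comparison figure is purely hypothetical, standing in for a checkout-counter charity that publishes no cost-per-life estimate at all.

```python
# Rough cost-effectiveness comparison. The ~$5,000/life figure is
# approximately GiveWell's published estimate for its top malaria
# charities; the $100,000/life figure is purely hypothetical, a
# stand-in for a charity that publishes no estimate.

def lives_saved(donation_usd: float, cost_per_life_usd: float) -> float:
    """Expected lives saved from a donation at a given cost per life."""
    return donation_usd / cost_per_life_usd

donation = 1_000_000  # a hypothetical $1M in donations

top_charity = lives_saved(donation, 5_000)    # ~200 lives
checkout = lives_saved(donation, 100_000)     # ~10 lives

print(f"GiveWell-style top charity: {top_charity:.0f} lives")
print(f"Hypothetical checkout charity: {checkout:.0f} lives")
print(f"Efficiency ratio: {top_charity / checkout:.0f}x")
```

Under these assumed numbers the quiet charity saves twenty times as many lives per dollar, which is the whole of the moral case in one division.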
What’s odd is that EA ends up supporting billionaire philanthropy as a de facto good, rather than as the best option we have under the current system, a system we should also be trying to dismantle. For about a year now I’ve had a post kicking around in my head that there’s no EA interest in putting numerical bounds on the value of, for example, a strong tenant movement, the end of mass incarceration, a strong labor movement, the end of the drug war, the end of war in general. EA is tainted with a centrist stink that assumes problems which are “controversial” in mass media *must* actually be controversial (ex. Palestine-Israel). Getting more people involved in “doing good” seems like an obvious answer, yet most EA solutions seem to be power-flattering answers that want you to make a lot of money and donate it to EA, while making 0 effort to convince your friends or society.
As someone with a statistics degree, I can say EA falls for the same “data wonk” trap a lot of wonks fall into: difficulty quantifying something leads you to focus on the easily quantifiable areas, which are usually correlated with power and the status quo. EA is a very young movement, and it lacks the historical knowledge to realize that people were once very sure the best thing to do with mentally unwell people was to lock them in an asylum for their lives. (The critique calls these last two paragraphs “EA orthodoxy”.)
I’m going to skim through the critique and add stuff where I think it matters.
I’ve avoided participating in EA because I’m not at a point where I’m doing significant philanthropy, and the heavy vibe of all the posts in the community is that you need a bundle of keywords to participate: words like epistemology, prior, Bayesian (I know this one, but seemingly not in the way people use it), utilitarian, objectively. I have a 4-year degree and still can’t participate, so we know that around 90% of the world population cannot participate in EA forums. I feared getting torn to shreds for making the point which should be obvious: how do we know what 3 generations from now will want, let alone 3 million? I am especially unconvinced that intelligence in the universe is an automatic good; nor do I believe that 10^1000 intelligent minds are inherently better than 10^10.
Anyone who hasn’t seen bitcoin specifically, and 99.9% of all crypto, as a full-stop Ponzi scheme does not deserve the descriptor “rational”. SBF should’ve been viewed as a con artist and a useful mark: EA should’ve been happy to take his money but assumed it was going to collapse. There are dozens of very rational takedowns of crypto out there.
100% this: “At the very least, critics have learned to watch their tone at all costs, and provide a constant stream of unnecessary caveats and reassurances in order to not be labelled “emotional” or “overconfident”.” Centrists hate emotion, and if you’re emotional you have already lost the argument. I have a strong emotional response to people being locked in cages for stealing baby formula, and because of that I am a loser in the eyes of EA. (Does emotion not guide deworming initiatives? Or are EAs just happy to make a number go up? I can’t tell)
“When Stuart Russell argues that AI could pose an existential threat to humanity, he is held up as someone worth listening to – “He wrote the book on AI, you know!” However, if someone of comparable standing in Climatology or Earth-Systems Science, e.g. Tim Lenton or Johan Rockström, says the same for their field, they are ignored, or even pilloried.[39] Moderate statements from the IPCC are used to argue that climate change is “not an existential risk”, but given significant disagreement among experts on e.g. deep learning capabilities, it seems very unlikely that a consensus-based “Intergovernmental Panel on Artificial Intelligence” would take a stance anything like as extreme as that of most prominent EAs. This seems like a straightforward example of confirmation bias to us.”
EAs care a lot about pandemic risk, and they were proven right with COVID. What they are still unable to reckon with is the inability of the science to effect mass change. 5% of the world(?) has had COVID; we have no idea whether that will lower those people’s life expectancies or QALYs, but doctors say it probably will. COVID has conservatively caused $10 trillion in damage, killed ~7 million people, and ruined most people’s entire 2020, if not continuing to ruin their lives on a day-to-day basis. And yet most of the western world has done very little to contain the disease, regardless of political affiliation or economic status. People called COVID a “baby pandemic” or “baby catastrophe”, a trial run for how humanity will deal with a much harder threat. We completely failed.
Let’s say you had a billion dollars to address “pandemic risk” in the world. Could you actually meaningfully reduce pandemic risk? I’m not sure you can—the reasons for this are heavily aligned with leftist thought, so of course the centrists in EA have no vocabulary to deal with them. (Things to consider: capitalism; white supremacy; the rich at Davos have rigorous COVID mitigation in place and therefore do not care that the poor at Walmart do not; the rich control the mass media, and their wealth depends on people continuing to shop...)
On Twitter I follow the likes of health reporter Helen Branswell, who retweets mainstream scientists from ex. Stanford, MIT, and Harvard; and most health reporters, experts, and scientists cannot fathom the voluntary, preventable mass death and disabling event that COVID continues to be. They were told, like many EA practitioners, that “politics” is bad and we just need “data and science”, and they fail to realize that even showing up to work is political; not saying “the government is a death cult” is political; not saying “the hospital I work at is forcing nurses to work 12-hour shifts with crappy COVID mitigation, to save the lives of patients who don’t believe they have COVID yet are infecting those same nurses, and these nurses deserve fair hazard pay for these horrible conditions” is political. If you worked remotely for UPMC (the largest hospital network in PA, US), you were offered a COVID vaccine before the contractor nurses and janitors who actually walked into COVID hospitals every day. This is a class issue, like it or not, and dumping a billion dollars into it won’t solve class.
One thing that’s been nipping at me that I haven’t seen anyone in EA say: fraud is not inherently bad. I would happily defraud a billionaire who made their fortune grinding up human babies. It seems like EA distanced itself so thoroughly from SBF because he made the movement look bad and fraudulent, not because the actions were themselves bad. In SBF’s case, I don’t agree with defrauding regular people. But if some moronic ten-millionaire lost five of those millions by putting money into something that reeked of a Ponzi scheme with no actual value, I’m not losing sleep over that. You could even argue it’s a net good if, say, 50% of that ended up donated to EA, and that millionaire wouldn’t have donated it otherwise. I’m not sure if I will get banned for saying this, which is asinine in a movement which will readily praise you for letting literal living humans starve today so that we can maybe grow more brains in a jar in 10 million years. If you ban me, you’re literally admitting that stealing money is worse than letting someone die.
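To spell out the arithmetic that the “net good” claim implies, here’s a minimal sketch. The $5M loss and the 50% donated share are the hypothetical from the paragraph above, and the ~$5,000-per-life figure is again roughly GiveWell’s top-charity estimate:

```python
# All figures are hypothetical, per the scenario above, except the
# ~$5,000/life number, which is roughly GiveWell's published estimate
# for its top charities.
lost_to_fraud = 5_000_000   # the ten-millionaire's $5M loss
donated_share = 0.5         # fraction assumed to reach effective charities
cost_per_life = 5_000       # rough GiveWell top-charity estimate

donated = lost_to_fraud * donated_share
print(f"Donated: ${donated:,.0f}")                             # $2,500,000
print(f"Implied lives saved: {donated / cost_per_life:.0f}")   # ~500
```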
Most centrists believe that whatever elites have defined as “crime” is bad, which makes them fit right into the status quo. Centrists don’t question why nobody is prosecuted for Clean Water Act violations or wealthy tax fraud. Every shoplifting prosecution should give you pause: “objectively”, the damage of elite crimes is far worse. https://www.yalelawjournal.org/forum/the-punishment-bureaucracy
I like the callout of “theories of change” and “Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality”. Poisonous funders are, IMO: any recreational drug (drugs are fine but drug companies are not), fast food and hard-to-define junk food, advertising, policing/prosecution/jailing/surveillance/support thereof (Palantir and ICE), 99.9% of crypto (when a good crypto comes out, we’ll know), fossil fuels, cars, probably a supermajority of animal products companies (and I’m not vegetarian).
> For about a year now I’ve had a post kicking around in my head that there’s no EA interest in putting numerical bounds on the value of, for example, a strong tenant movement, the end of mass incarceration, a strong labor movement, the end of the drug war, the end of war in general.
If you do get to writing the post, you probably want to include that mass incarceration was something Open Phil looked into in detail, spending $130M on it before deciding in 2021 that money to GiveWell top charities went farther. I’d be very interested to read the post!
> power-flattering answers that want you to make a lot of money and donate it to EA,
Making money to donate hasn’t been a top recommendation within EA for about five years: it still makes sense for some people, but not most.
When you say “donating to EA”, that’s ambiguous between “donating it to building the EA movement” and “donating it to charities that EAs think are doing a lot of good”. If you mean the latter, I agree with you (ex: see what donation opportunities GWWC marks as “top rated”).
> while making 0 effort to convince your friends or society
When people go into this full time, we tend to say they work in community building. But that implies more of an “our goal is to get people to become EAs” framing than is quite right—things like the 80k podcast are often more about spreading ideas than about growing the movement. And a lot of EAs do this individually as well: I’ve written hundreds of posts on EA that are mostly read by my friends, and I’ve had a lot of in-person conversations about the ideas.
> Let’s say you had a billion dollars to address “pandemic risk” in the world. Could you actually meaningfully reduce pandemic risk? … This is a class issue, like it or not, and dumping a billion dollars into it won’t solve class.

Effectively addressing risk from future pandemics wouldn’t look like “spend a lot more money on the things we are already doing”. Instead it would be things like the projects listed in Concrete Biosecurity Projects (some of which could be big) or Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics. (Disclosure: I work for a project that’s on both those lists.)
> Does emotion not guide deworming initiatives? Or are EAs just happy to make a number go up? I can’t tell
Personally, my donations to deworming haven’t been guided by my emotional reaction to parasites. My emotions are just not a very good guide to what most needs doing! I’m about as emotionally affected by harm to a thousand people as by harm to a million: my emotions just aren’t able to handle scale. Emotions matter for motivation, but they’re not much help (and can hurt) in prioritization.
You also write both:
> EA should’ve been happy to take his money but assumed it was going to collapse.
And then later on:
> I like the callout of “theories of change” and “Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality”. Poisonous funders are, IMO: …99.9% of crypto

These seem like they’re in conflict?
[1] https://twitter.com/sarahtaber_bww/status/1617194799261487108
[2] https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1