You’ve caught me stuck in bed, and I’m probably the most EA-critical person that regularly posts here, so I’ll take a stab at responding point by point to your list:
It’s good and virtuous to want to help others effectively: to help more rather than less with one’s efforts.
2. Agree.
We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).
3. Agree on global poverty and animal welfare, but I think it might be difficult to do “a lot of good” in some catastrophic risk areas.
In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
4. Agreed, although I should note that efforts to better target our giving can have diminishing returns, especially when a problem is speculative and not well understood.
In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what’s decision-relevant.)
5. Agreed for global poverty and animal welfare, but I’m mixed on this for speculative causes like AI risk, where there’s a decent chance that efforts could backfire and make things worse, and there’s no real way to tell until after the fact.
Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
6. Agreed. Unfortunately, EA often fails to live up to this idea.
So it’s good and virtuous to use quantitative tools and evidence wisely.
7. Agreed, but see above.
GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
8. Agreed, I like GiveWell in general.
So it’s good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
9. Agreed, with regards to the area GiveWell specialises in.
There’s no good reason to think that GiveWell’s top charities are net harmful.[1]
10. I think the chance that GiveWell’s top charities are net good is very high, but not 100%. See mosquito nets being repurposed for fishing for a possible pitfall.
But even if you’re the world’s most extreme aid skeptic, it’s clearly good and virtuous to voluntarily redistribute your own wealth to some of the world’s poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
11. Agreed.
Many are repelled by how “hands-off” effective philanthropy is compared to (e.g.) local volunteering. But it’s good and virtuous to care more about saving and improving lives than about being hands on. To prioritize the latter over the former would be morally self-indulgent.
12. Agreed, but sometimes being hands-on can be helpful with improving lives. For example, being hands-on can allow one to more easily receive feedback, understand overlooked problems with an intervention, and ensure it goes to the right place. I don’t think voluntourism is good at this, but I would like to see support for more grassroots projects by people actually from impoverished communities.
Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value “sure things”. In such cases, this is worth doing.
13. I agree in principle, but disagree in practice, given that EA’s “hits-based giving” can be pretty bad. The effectiveness of hits-based giving very much depends on how much each miss costs and the likely effectiveness of a hit. I don’t think the $100,000 grant for a failed video game was a good idea, nor the $28,000 to print out Harry Potter fanfiction that was free online anyway.
Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
14. This is so broad as to be trivially true, but in practice I often disagree with the judgements here.
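To make the portfolio logic in point 12 concrete (and to show where the practical disagreement in point 13 lives), here is a minimal sketch with entirely invented numbers; the hit probability, payoff, and “sure thing” value are assumptions, not figures from any actual grant:

```python
# Toy hits-based-giving portfolio: ten independent long shots vs. one
# lower-expected-value "sure thing". All numbers are invented.
p_hit, hit_value, n_shots = 0.05, 100, 10
sure_thing_value = 20

portfolio_ev = n_shots * p_hit * hit_value       # 10 * 0.05 * 100 = 50
p_at_least_one_hit = 1 - (1 - p_hit) ** n_shots  # 1 - 0.95**10 ~= 0.40

print(f"portfolio expected value: {portfolio_ev}")             # 50.0
print(f"sure thing value: {sure_thing_value}")                 # 20
print(f"chance of at least one hit: {p_at_least_one_hit:.0%}")  # 40%
# The portfolio wins in expectation, but only because of the assumed
# p_hit and hit_value -- which is exactly where the disagreement lives.
```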
The above point encompasses much relating to politics and “systemic change”, in addition to longtermist long-shots. It’s very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy—just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
15. Generally agree.
Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
16. Anti-capitalist is a pretty broad tent. I agree that some people who adopt that label are dumb and naive, but others have pretty good ideas. I think it would be really dumb if capitalism were still the dominant system 1000 years from now, and there are political interventions that can be predicted to reliably help people. I think “overthrow the government for communism” gets the side-eye; “universal healthcare” does not.
In general, I don’t think that doing good through one’s advocacy should be treated as a substitute for “putting one’s money where one’s mouth is”. It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I’m completely open to judging political donations (when epistemically justified) as constituting “effective philanthropy”—I don’t think we should put narrow constraints on the latter concept, or limit it to traditional charities.
Some people are poor and cannot contribute much without kneecapping themselves. I don’t think those people are useless, and I think for a lot of people political action is a rational choice for how to effectively help. Similarly, some people are very good at political action, but not so good at making large amounts of money, and they should do the former, not the latter.
Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact.
I agree it provides useful tools. But if you take tools like expected value too seriously, you end up doing insane things (see SBF). In general, EA is way too willing to swallow the math even when it gives bad results.
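A minimal sketch of the failure mode being pointed at here, with entirely invented numbers: taken literally, expected value lets an arbitrarily tiny probability of an arbitrarily huge payoff beat any well-evidenced intervention.

```python
# Invented numbers: naive expected value lets a tiny probability of an
# astronomical payoff dominate a well-evidenced intervention.
interventions = {
    "bednets (well-evidenced)": (0.95, 1_000),  # (P(success), value if it works)
    "speculative moonshot": (1e-9, 1e15),       # vanishingly likely, astronomically valued
}

for name, (p, value) in interventions.items():
    print(f"{name}: expected value = {p * value:,.0f}")

# The moonshot "wins" (1,000,000 vs 950) despite being a near-certain
# nothing -- the kind of result meant by "swallowing the math".
```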
Agreed, depending on what you mean by “reasonable”.
Ethical cosmopolitanism is correct: It’s better and more virtuous for one’s sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.
Agreed, with the caveat that we are talking about beings that currently exist or have a high probability of existing in the future.
Insofar as one’s natural sympathy falls short, it’s better and more virtuous to at least be “continent” (as Aristotle would say) and allow one’s reason to set one on the path that the fully virtuous agent would follow from apt feelings.
The term “fully virtuous agent” raises my eyebrows. I don’t think that’s a thing that can actually exist.
Since we can do so much good via effective donations, we have—in principle—excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
Agreed, with emphasis on the “permissible means”.
It looks like it, although of course this could be negated if they got their fortunes from more harmful than average means. I don’t see evidence that this is the case for these examples.
Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
Agreed.
When the stakes are high, there are no “safe” options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell’s top charities, would make you causally responsible for approximately ten deaths every year. That’s really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
Agreed, although I’ll note that from my perspective, persuading an EAer to donate to AI x-risk instead of GiveWell will have a similar effect, and should be subjected to the same level of scrutiny.
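For readers wondering where “approximately ten deaths” comes from, the implied arithmetic is roughly as follows; the $5,000 cost-to-save-a-life figure is an assumed order of magnitude (GiveWell’s actual estimates vary by charity and year):

```python
# Implied arithmetic behind "approximately ten deaths every year".
# The cost-per-life figure is an assumed, GiveWell-style order of magnitude.
annual_donation = 50_000       # dollars donated per year
cost_per_life_saved = 5_000    # assumed dollars per life saved

lives_saved_per_year = annual_donation / cost_per_life_saved
print(f"~{lives_saved_per_year:.0f} lives per year")  # ~10
```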
Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don’t, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
Agreed for some critiques of GiveWell/AMF in particular, like the recent Time article. However, I don’t think this applies to critiques of AI x-risk, because I don’t think AI x-risk charities are effective. If such criticism turns people away and they donate to Oxfam or something instead, that is a net good.
Deliberately or negligently making the world worse is vicious, bad, and wrong.
Agreed.
Most (all?) of us are not as effectively beneficent as would be morally ideal.
Agreed.
Our moral motivations are very shaped by social norms and expectations—by community and culture.
Agreed.
This means it is good and virtuous to be public about one’s efforts to do good effectively.
Generally agreed.
If there’s a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
Agreed.
In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
Agreed, but “in principle” is doing a lot of work here. I think the initial Bolshevik party broadly fit this description, for an example of how this could go wrong.
For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
Depends on which community we are talking about. See again: the Bolsheviks.
That’s what the “Effective Altruism” community constitutively aims to do.
Agreed.
It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
Agreed on all statements.
Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
Agreed.
Such reflection has indeed happened. (I don’t know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It’s not entirely new, of course: SBF’s fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more “professional” in various ways.)
There has definitely been some reflection and change, much of which I approve of. But it has not been smooth sailing, and I think the response to other scandals leaves a lot to be desired. It remains to be seen whether ongoing efforts are enough.
No community is foolproof against bad actors. It would not be fair or reasonable to tar others with “guilt by association”, merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
I agree that individuals should not be tarred by SBF, but I don’t think this same protection applies to the movement as a whole. We care about outcomes. If a fringe minority does bad things, those things still occur. SBF conducted one of the largest frauds in history: you don’t see Oxfam having this kind of effect. It’s n=1 for billion-dollar frauds, but the n is a lot higher if we consider abuse of power, sexual harassment, and other smaller harms.
The more power and influence EA amasses, the more appropriate it is to be concerned about bad things within the community.
The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
I think EA has totally flubbed it on AI x-risk. Therefore, if I have the choice between recommending EA in general or just GiveWell’s top charities, doing the latter will be better.
The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
Agreed, but certain types of dickish behaviour are a flaw of the community: they have a detrimental effect on its health and make its decision-making and effectiveness worse.
If you find the EA community annoying, it’s fine to say so (and reject the “EA” label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don’t want to be associated with them.
Agreed. I generally steer people to GiveWell or its charities, rather than to EA as a whole.
None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
I think some of the claims are less valuable outside of utilitarianism, but whatever.
With that all answered, let me add my own take on why I don’t recommend EA to people anymore:
I think that the non-speculative side of EA (global poverty and animal welfare) is nice and good, and is on net making the world a better place. I think the speculative side of EA, and in particular AI risk, contains some reasonable people, but also enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into net-negative territory.
Most of this bad thinking originates from the Rationalist community, which is generally a punchline in wider intellectual circles. I think the Rationalist community is on the whole epistemically atrocious, overconfident about things for baffling reasons, and prone to hero worship, spreading a lot of factually dubious ideas with very poor justification. I find some of the heroes they adore to be unpleasant people who spread harmful norms, ideas, and behaviour.
Putting it all together, I think that overall EA is a net positive, but that recommending EA is not the most positive thing you can do. Attacking the bad parts of EA while acknowledging that malaria nets are still good seems like a completely rational and good thing to do, either to put pressure on EA to improve, or to provide impetus for the good parts of EA to split off.
> Says he’s stuck in bed and only going to take a stab
> Posts a thorough, thoughtful, point-by-point response to the OP in good faith
> Just titotal things
- - - - - - - - - - - - - - -
On a serious note, as Richard says, it seems like you agree with most of his points, at least on the ‘EA values/EA-as-ideas’ set of things. It sounds like at the moment you think you can’t recommend EA without recommending the speculative AI part of it, which I don’t think has to be true.
I continue to appreciate your thoughts and contributions to the Forum and have learned a lot from them, and given the reception you get[1] I think I’m clearly not alone there :)
[1] You’re probably by far the highest-upvoted person who considers themselves EA-critical here? (though maybe Habryka would also count)
“enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into the net-negative territory”
Do you mean drag just longtermist EA spending into net-negative territory, or EA spending as a whole? And do you expect actual bad effects from longtermist EA, or just wasted money that could have been spent on short-term stuff? I think AI safety money is likely wasted (even though I’ve ended up doing quite a lot of work paid for by it!), but probably mostly harmless. I expect the big impact of longtermist money, for good or ill, to come from biorisk spending, where it’s clear that at least catastrophic risks are real, even if not existential ones. So everything you say about rationalism could be true, and longtermist spending could still be quite net positive in expectation if biorisk work goes well.
Given how many of the frontier AI labs have an EA-related origin story, I think it’s totally plausible that the EA AI x-risk project has been net negative.
Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy reasoning about even tiny increases in X-risk being very bad. But actually, “net negative in expectation” is compatible with “probably mostly harmless”. I.e. the expected value of X can be very negative, even while the chance of the claim “X did (actual not expected) harm” turning out to be true is low. If you don’t really buy the arguments for AI X-risk but you do buy the argument for “very small increases in X-risk are really bad” you might think that. On some days, I think I think that, though my views on all this aren’t very stable.
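To spell out that compatibility with toy numbers (all invented for illustration):

```python
# Invented numbers showing how "net negative in expectation" and
# "probably mostly harmless" can both be true of the same action.
p_catastrophe = 0.01             # assumed small chance of a very bad outcome
harm_if_catastrophe = -1_000_000
benefit_otherwise = 10

expected_value = (p_catastrophe * harm_if_catastrophe
                  + (1 - p_catastrophe) * benefit_otherwise)
print(f"expected value: {expected_value:,.1f}")       # about -9,990: very negative
print(f"P(no actual harm): {1 - p_catastrophe:.0%}")  # 99%: probably harmless
```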
That seems reasonable to me! I’m most confident that the underlying principles of effective altruism are important and good, and you seem to agree on that. I agree there’s plenty of room for people to disagree about speculative cause prioritization, and if you think the EA movement is getting things systematically wrong there then it makes sense to (in effect, not in these words) “do EA better” by just sticking with GiveWell or whatever you think is actually best.
I enjoyed reading your responses to these points. Thanks for taking the time to write them out.