Thank you for breaking this up into specific sections; I think this will encourage better discussion of the object-level points, and hopefully keep the community discussion from the initial post going. I want to thank the authors for the research and effort they've put into this series, and I sincerely hope it has a positive impact on the community.
* * *
From a personal perspective, I feel like these summary points:
1. The Effective Altruism movement has rapidly grown in size and power, and we have a responsibility to ensure that it lives up to its goals
2. EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques
3. Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights
4. Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest
5. EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation
Can themselves be summarised into two different thrusts:[1]
Critique 1: The Effective Altruism movement needs to reform its internal systems
From points 1, 2, and 5. Under this view: EA has grown rapidly, but its institutional structure has not adapted optimally, leading to a hierarchical, homogenous, and unaccountable community structure. This means it does not have the feedback loops necessary to identify cases where there are conflicts of interest, or where the evidence no longer matches the movement’s cause prioritisation or financial decisions. To change this, the EA movement needs to accept various reforms (which are discussed later in the DEAB piece).
Critique 2: The Effective Altruism movement is wrong
From points 3 and 4. Under this view: Many EA beliefs are not supported by evidence or subject-matter experts. Instead, the movement does little real-world good, and its ‘revealed preference’ is to launder the reputations of tech billionaires in the Global North. The movement is rotten to the core, and the best way to improve the world would be to end the enterprise altogether.
(I personally, and presumably most people who have self-selected into reading and posting on the EA Forum, have a lot more sympathy for Critique 1 than for Critique 2.)
* * *
These are two very different, perhaps almost orthogonal approaches. They remind me of the classic IIDM split between ‘improving how institutions make their decisions’ and ‘improving the decisions that the institutions make’. I think this also links to:
EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)
In a simplistic sense, this remains true. But in another sense, it’s a bit of EA-Judo.[2] To the extent the evidence supports having very high credence in AI risk this century (and this isn’t by any means unanimous in the community or on the Forum), or that the world massively underinvests in work to prevent the next global pandemic and mitigate its impacts, then these priorities are what EA is. If EA were to change what the community thinks about them radically in response to deep critiques, the resulting movement wouldn’t look like EA at all, and many people, both current supporters and critics, probably wouldn’t think “EA” would be a useful label for it.
Perhaps this is related to the overarching theme of being open to “deep critiques”. However, neither this summary nor a quick Ctrl+F of the original post gave me a clear definition of what a ‘deep critique’ is[3] or examples of deep critiques that the movement is bad at engaging with. Of the two examples which are referenced, I tend to view Thorstad’s blog as following Critique 1 - that the EA movement does not take its critics/issues seriously enough and needs to change because of it. However, the forthcoming book (which I am intending to review, subject to irl constraints) seems to be more along the lines of Critique 2. I think most of the movement’s most caustic critics, and the ones that probably get the most air-time, follow this line of attack.[4]
From my perspective, a lot of the Forum pushback probably comes from the intuition that if EA is open to Critique 2 and accepts its conclusions and implications, then it won’t be EA anymore. So I think more attention needs to be paid to separating discussions that are closer to Critique 1 from those closer to Critique 2. This isn’t to say that EA shouldn’t engage with arguments along the lines of Critique 2, but I think it’s worth being open about the strengths of those critiques, and the corresponding strength of evidence that would be needed to support them.
[1] For clarity, I’m trying to summarise the arguments I think they imply, not making arguments I personally agree with. I also note that, as there are multiple authors behind the account, they may vary in how much they agree or disagree with the quality of my summarisation.
[2] This refers to the tendency to refute any critique of EA by saying, “if you could show this was the right thing to do, then that would be EA” - which may be philosophically consistent but has probably convinced ~0 people in real life.
[3] If the authors, or other forum users, could help me out here I would definitely appreciate it :)
[4] Some examples:
* https://www.currentaffairs.org/2022/09/defective-altruism
* https://twitter.com/timnitGebru/status/1614366414671937536
* https://www.youtube.com/watch?v=B_M64BSzcRY