Thank you for breaking this up into specific sections; I think this will encourage better discussion of the object-level points and hopefully sustain the community discussion from the initial post. I want to thank the authors for the research and effort that they've put into this series, and I sincerely hope that it has a positive impact on the community.
* * *
From a personal perspective, I feel like these summary points:
The Effective Altruism movement has rapidly grown in size and power, and we have a responsibility to ensure that it lives up to its goals
EA is too homogenous, hierarchical, and intellectually insular, with a hard core of "orthodox" thought and powerful barriers to "deep" critiques
Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights
Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest
EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation
Can themselves be summarised into 2 different thrusts: [1]
Critique 1: The Effective Altruism movement needs to reform its internal systems
From points 1, 2, and 5. Under this view: EA has grown rapidly, but its institutional structure has not adapted accordingly, leading to a hierarchical, homogenous, and unaccountable community structure. This means it lacks the feedback loops necessary to identify cases where there are conflicts of interest, or where the evidence no longer matches the movement's cause prioritisation/financial decisions. In order to change this, the EA movement needs to accept various reforms (which are discussed later in the DEAB piece).
Critique 2: The Effective Altruism movement is wrong
From points 3 and 4. Under this view: Many EA beliefs are not supported by evidence or by subject-matter experts. Instead, the movement does little real-world good, and its "revealed preference" is to launder the reputations of tech billionaires in the Global North. The movement is rotten to the core, and the best way to improve the world would be to end the enterprise altogether.
(I personally, and presumably most people who have self-selected into reading and posting on the EA Forum, have a lot more sympathy for Critique 1 than for Critique 2.)
* * *
These are two very different, perhaps almost orthogonal approaches. They remind me of the classic IIDM split between "improving how institutions make their decisions" and "improving the decisions that the institutions make". I think this also links to:
EAs should see EA as a set of intentions and questions ("What does it mean to 'do the most good', and how can I do it?") rather than a set of answers ("AI is the highest-impact cause area, then maybe biorisk.")
In a simplistic sense, this remains true. But in another sense, it's a bit of EA-Judo.[2] To the extent that the evidence supports having very high credence in AI risk this century (and this isn't by any means unanimous in the community or on the Forum), or that the world massively underinvests in work to prevent the next global pandemic and mitigate its impacts, then these priorities are what EA is. If the community were to change what it thinks about them radically in response to deep critiques, the resulting movement wouldn't look like EA at all, and many people, both current supporters and critics, probably wouldn't think EA would be a useful label for it.
Perhaps this is related to the overarching theme of being open to "deep critiques". However, this summary, and a quick Ctrl+F of the original post, didn't give me a clear definition of what a "deep critique" is[3] or examples of them that the movement is bad at engaging with. Of the two examples that are referenced, I tend to view Thorstad's blog as following Critique 1: that the EA movement does not take its critics/issues seriously enough and needs to change because of it. However, the forthcoming book (which I intend to review, subject to real-life constraints) seems to be more along the lines of Critique 2. I think most of the movement's most caustic critics, and the ones that probably get the most air-time, follow this line of attack.[4]
From my perspective, a lot of the Forum pushback probably comes from the intuition that if EA is open to Critique 2 and accepts its conclusions and implications, then it won't be EA anymore. So I think there needs to be more attention paid to separating discussions that are closer to Critique 1 from those closer to Critique 2. This isn't to say that EA shouldn't engage with arguments along the lines of Critique 2, but I think it's worth being open about the strength of those critiques, and the corresponding strength of evidence that would be needed to support them.
[1] For clarity, I'm trying to summarise the arguments I think they imply, not making arguments I personally agree with. I also note that, as there are multiple authors behind the account, they may vary in how much they agree or disagree with the quality of my summarisation.
[2] This refers to the tendency to refute any critique of EA by saying, "if you could show this was the right thing to do, then that would be EA." This may be philosophically consistent, but it has probably convinced ~0 people in real life.
[3] If the authors, or other forum users, could help me out here, I would definitely appreciate it :)
[4] Some examples:
* https://www.currentaffairs.org/2022/09/defective-altruism
* https://twitter.com/timnitGebru/status/1614366414671937536
* https://www.youtube.com/watch?v=B_M64BSzcRY