Originally from the US, but I’ve been living, studying, and working around the world since 2018 and look forward to keeping up the adventure! EA-adjacent—very interested in trying to have a positive impact, but not quite resonating with upper-case “Effective Altruism”. Happy to share more
—
Strongly values-driven operator with a diverse analytical and managerial skill set, always excited to drive solutions for positive impact.
Dynamism | Empiricism | Collaboration | Solutions-Orientation | Mission Centricity
Brennan W.
Thank you so much for taking on this project and communicating the results! I find this kind of work highly valuable and would love to see similar initiatives conducted more regularly across the spectrum of topics where there are gaps between the relevant EA communities and non-EA communities.
It’s very encouraging to see a good faith attempt at “worldview diversification” in practice :)
2- makes sense!
1,3,4- Thanks for sharing (the NYT summary isn’t working for me, unfortunately), but I see your reasoning here that the intention and/or direction of the attempted ouster may have been “good”.
However, I believe the actions themselves represent a very poor approach to governance and demonstrate a very narrow focus that clearly didn’t appropriately consider many of the key stakeholders involved. Even assuming the best intentions, from my perspective, when a person has been placed on the board of such a consequential organization and is explicitly tasked with helping to ensure effective governance, the degree to which this situation was handled poorly is enough for me to come away believing that the “bad” of their approach outweighs the potential “good” of their intentions.
Unfortunately, it seems likely that this entire situation will wind up backfiring relative to what was (we assume) intended, by creating a significant amount of negative publicity for, and negative sentiment towards, the AI safety community (and EA). At the very least, there is now a new (all-male 🤔 but that’s a whole other thread to expand upon) board with members that seem much less likely to be concerned about safety. And now Sam and the less cautious cohort within the company seem to have a significant amount of momentum and goodwill behind them internally, which could embolden them along less cautious paths.
To bring it back to the “good guy bad guy” framing: maybe I could buy that the board members were “good guys” as concerned humans, but “bad guys” as board members.
I’m sure there are many people on this forum who could articulate the points I’ve attempted much more clearly in specific philosophical terms 😅 but I hope the general ideas came through coherently enough to add some value to the thread.
Would love to hear your thoughts and any counter points or alternative perspectives!
Hey Ryan :)
I definitely agree that this situation is disappointing, that there is a wedge between the AI safety community and Silicon Valley mainstream, and that we have much to learn.
However, I would push back on the phrasing “we are at least the good guys” for several reasons. Apologies if this seems nitpicky or uncharitable 😅 it just caught my attention and I hoped to start a dialogue.
1. The statement suggests we have a much clearer picture of the situation and factors at play than I believe anyone currently has (as of 22 Nov 2023)
2. The “we” phrasing seems to suggest that the members of the board in question are representative of EA as a group
a. I don’t believe their views or actions are well enough known to assess how in line they are with general EA sentiment
b. I don’t think the EA community has a strong consensus on the issues of this particular case
3. Many people, in good faith and with substantive arguments, come to the opposite conclusion and see the actions of the board as having been “bad”, and are highly critical of the potential influence EA may have had in the situation
4. Flattening the situation to “good guys” and “bad guys” seems to be a bit of a rhetorical trap that is risky from an epistemological perspective. (I’m sure you have much more nuanced underlying reasons and pieces of evidence, and I completely respect using a rhetorical shortcut to simplify a point—apologies for the pedantry!)
Maybe on a more interesting note, I actually interpret this case quite differently and think that the board made a serious mistake and comes out of this as the “less favorable” party. I’d love to discuss your reasons for seeing their actions positively in more depth, and would be happy to share more about why I see them negatively if you’re interested 😊
I would strongly push back against the idea that “insinuation and ‘political’ critique” are all that critics have. I’m currently posting from my phone before bed, but happy to follow up at a later date, once I have some free time, with a more in-depth and substantive discussion on the matter if you’d be interested :)
For this quick message though, I hope it is at least fair to suggest that dismissing critiques offhand is potentially risky, as we are naturally inclined to steelman our own favored conclusions and strawman the arguments against them, which doesn’t do us any favors epistemologically speaking.
I believe the TIME article has been updated since its original publication to reflect your response. If you have the chance, would you be able to comment on the updated version?
Excerpt taken as of 18:30 PST, 3 Feb 2023: “In an email following the publication of this article, Wise elaborated. “We’re horrified by the allegations made in this article. A core part of our work is addressing harmful behavior, because we think it’s essential that this community has a good culture where people can do their best work without harassment or other mistreatment,” Wise wrote to TIME. “The incidents described in this article include cases where we already took action, like banning the accused from our spaces. For cases we were not aware of, we will investigate and take appropriate action to address the problem.””
Hiring Well: Research project calling for participants
Thank you for the very in-depth post! I’ve had a lot of conversations about the subject myself over the past several months and considered writing a similarly themed post, but it’s always nice to find that some very talented people have already done a fantastic job carefully considering the topic and organizing the ideas into a coherent piece :)
On that note, I’m currently conducting a thesis on effective hiring/selection methods in social-mission startups, with the hope of creating a free toolkit to help facilitate recruitment in EA (and other impact-driven) orgs. If you have any bandwidth, I’d love to learn more about your experience with the talent ecosystem in EA and see if I could better tailor my project to help address some of the gaps/opportunities you’ve identified.
The tent and campground analogy and vocabulary are very helpful, thank you! I wish I’d had them in my toolkit a few weeks ago when trying to discuss the nuances of community building at an EA retreat—they probably would’ve saved a lot of time and made for better mutual understanding. Glad to have them going forward though!
Thank you for posting and putting this (and the other resources you’ve linked) together!
I’m very interested in developing the talent supply chain in the EA community and see a lot of value in these kinds of databases. However, the added value of a database can be much greater the more comprehensive it is, tempered by appropriate tagging for filterability → it would be a nice goal to work towards integrating this database with the 80k Job Board database, as well as connecting more directly with EA orgs to bridge the gap between supply and demand for talent and opportunities.
I cannot speak for the newly launched EA Hiring Network, but I hope to do some direct work building out the talent infrastructure of the EA ecosystem, and I believe there would be long-term strategic value in integrating all opportunities (full-time, volunteer, training, internship—no need to limit by category so long as the tags allow for filtering) into a cohesive database, in tandem with a talent database, which would allow for much more efficient talent matching and allocation across EA orgs.
While any progress in that direction would be at the soonest many months in the future, is collaborating on this kind of initiative something that you would be interested in? If so, I’d love to connect and discuss potential synergies :)
@Lee McC
Thank you for posting!
I am about to enter the last semester of a Master’s in Business where I should have the opportunity to focus a thesis and potentially pursue an entrepreneurial startup as part of my program. My goal is to leverage that time and those resources to work on something that could potentially be useful to the EA (and wider impact oriented) community and would love to hear more about specific needs/gaps you’d like to see addressed so that I could steer my projects accordingly.
I have a background in Industrial/Organizational Psychology, HR, and Not-for-Profit Leadership, and even (accidentally) had a business strategy role that involved lots of research on bridging skills gaps and developing talent pipelines. Currently I’m drafting several proposals for projects/ventures that are relevant to the careers/talent space and hope to have a small portfolio of actionable plans by the end of August.
I know that Lee McC is also working on a project in this space with Nonlinear. I’d love to get in touch with anyone else working on the subject to get an overview of the state of affairs, goals and strategy for talent management in EA
Interested and posting so I’ll receive notifications if there’s any change here—not available for work until January though, so I’ll be reaching out later :)
Interested and posting so I’ll receive notifications of any change to the post :)
But not available for work until January
Location: Open to Relocation (will be in Berlin through Dec 22)
Remote: Prefer in-person / Flex / Hybrid
Willing to relocate: Yes
Skills:
HR / People Management
Org/Job Analysis
Recruitment
Performance Development
Problem Resolution
Project Management
(Happy to elaborate if interested, but not expanding here to save Forum real estate)
Small Organization Leadership
Strategy / Analysis
Public Speaking
Standard Productivity Software + Adobe Photoshop/Premiere
Résumé/CV/LinkedIn:
Email:
Please reach out via LinkedIn or via DM on here (prefer not to publicly share contact information)
Notes:
Interested in Consulting, Operations, Strategy, and more—happy to provide references :)
Available from and until:
January 2023 - ?
I actually spent about half of the past weekend at an EA retreat near San Francisco trying to communicate these exact concerns—super refreshing, and also kind of baffling, to see such a well-developed, detailed post so in line with the case I was making!
I am considering writing a post about potential projects to improve this area, as well as lessons that could be learned from other movements that have experience adjusting the messaging of complex ideas to best benefit wider audiences—namely in Science Communication.
Here’s a brief list of ideas:
Please keep in mind that this is a casual brainstorm, and that there are probably plenty of valid points of contention/concern that could be raised with the various items. Let’s try to be constructive and productive >>>>> dismissive/hyper-critical :) *
-More resources to the Community Health Team
-Gathering a small, diverse team of highly skilled and well aligned Branding/PR/Public Communication Specialists and funding them as a dedicated “EA PR” / “Brand Management” / “Partnerships and Community Outreach” group either within the CEA or as a stand-alone entity
-Rather than doing the above point internally, bringing in outside specialists (who are still generally aligned with the goals of trying to do good) to advise and consult with management at EA orgs
-Fostering partnerships with groups, organizations and communities that are strategically aligned along at least some shared values (Scientific Skeptics movement, Science Communication field, Teachers associations, community advocacy groups in target impact areas, etc)
-Spinning off more accessible (shorter, more concise, with a focus on production quality) versions of current communications projects:
-Current long form Podcasts → Shorter, professionally done, narratively compelling podcasts a la “Planet Money”, “Short Wave” or “Unexplainable”
-Current Forum Posts → Refined into articles of a more reasonable length with a coherent tone and style
-Current Books → key-ideas blurbs synthesizing main takeaways into self-contained articles/summaries
-Current Websites/Articles → front pages with more refined, concise introductions to a subject that subsequently connect to the more in-depth articles (currently most EA org web pages are very long and dense)
-Diversifying the EA YouTube presence as well as on other platforms/media
-Identifying EAs who have a good track record of positive public communication and connecting them with external media rather than relying on the same few well-known figures to do the majority of such communications (podcasts, news journals, influencers, etc)
-Developing an “EA Onboarding” toolkit both to help people in EA better refine their approach to sharing EA with others as well as to help people outside of EA familiarize themselves with the movement in an approachable way (a Design Thinking approach would be very beneficial here)
-Changing EA messaging among EA orgs and EA leaders to be more digestible, accurate, concise and respectful to audiences outside of the hardcore moral philosophy world → of particular note, emphasizing that EA knows it can’t be sure about any one approach, which is why we tend to divide funding and focus among many different possible paths to impact (longtermism and AGI get WAY more visibility/discussion in EA communications than is representative of their share of the community—and they’re especially controversial)
-Opening up EA org boards to diverse members outside of the EA movement to provide a “reality check” as well as much needed perspectives. Organizational boards should make an especially concerted effort to ensure that members of the communities EA is trying to help are strongly represented
-Sponsoring several “spin-off” brands aligned with but distinct from EA that use different names and communication styles specifically designed to synergize with strategically important groups of people currently outside EA. Not every important group responds well to the standard approach of long, detailed lectures on moral philosophy—market segmentation and targeted communications are vital for an effective communication strategy
-Updating current recruiting practices in EA orgs to ensure candidates with diverse approaches to EA are not being screened out → full EA value and concept alignment may be important in some roles, but may actually be detrimental in others—particularly when entire teams are ideologically/stylistically homogeneous
-Plenty more I’m sure!
Would this be something people would like to see expanded upon / collaborate on to create some movement on the subject?
*Edit: I accidentally hit Save before I was finished, went back to finish
*I started writing this the week after your reply but went down too deep of a rabbit hole and didn’t get around to finishing it. Apologies for the delay! Note, the first portion was written 3 months ago (Novemberish 2023) and the latter portion was written today (12 Feb 2024)
Preamble
Ok—I’ve had a bit more time to read through some of your writing and some of the comments to give myself a little context and hopefully I can contribute a bit more meaningfully now.
Before getting into details though, probably best to frame things:
My initial comment was solely aimed at responding to your original comment in this thread in a relative vacuum without having read through the paper or summary. Now that I’ve read the summary you shared[1], I imagine that we could have a much longer discussion on quite a few different points where we may productively disagree → however, to keep things concise I’ll be focusing the discussion here on this specific line from your comment above: “My suspicion is that [no article of a similar style arguing against EA principles] can exist because there’s no reasonable way to make such an argument; insinuation and “political” critique is all that the critics have got”
As I had not, at the time of my original comment, read further, I was not aware of your definitions of “insinuation” and “political critique” → now, having read more, it would probably be helpful to clearly share those definitions, as I understand them from your writing, here. (If I’ve misunderstood, please let me know!)
Insinuation: any critical, disparaging, or otherwise negative commentary that is made without significant explanation, evidence, reasoning, good-faith argumentation, or further context.
Political Critique: criticism that focuses not upon principles, but rather on practical, real-world matters. [2]
While I have personally engaged with people who have presented many critiques of Effective Altruism, I’ve never tried to assess criticism systematically, and most of my familiarity with critiques of EA comes from undocumented, anecdotal encounters. I also don’t regularly read or subscribe to many of the various media wherein formalized criticism of EA might be most common, so I’m not very familiar with whatever existing body of external criticism there is[3]. It is probably worthwhile to distinguish which kinds of criticism we want to address:
Formal critiques: Pieces of criticism that are documented and were made with at least a reasonable degree of intentionality, thought, and a clear purpose of arguing against some aspect of (or associated with) EA. Examples may include academic and non-academic articles, in-depth blog posts, podcasts, pieces of journalism, formal debates, books, etc. It is probably better not to consider idle social media commentary, one-sided ranting in informal settings, or casual anecdotal conversation
External critiques: Pieces of criticism that come from sources that don’t identify as part of the EA movement. While there is plenty of criticism shared by and among people under the EA umbrella, I posit that external criticism provides some unique value as it seems more likely to represent ‘public opinion’, to consider factors that may be neglected within EA, to propose different ways of thinking than those commonly used within EA, and to be less biased by various ‘in-group’ effects
^I hope this sounds reasonable—if you’d like to modify any points please let me know :)
On another note, at some point (time permitting) I would love to flesh out a more comprehensive post synthesizing and summarizing criticism of EA in a more rigorous, systematic and thoughtful way. However, a project like that seems like it would take quite a bit of work and collaboration, so I’m not too optimistic I’ll be able to take it on personally (at least not in the near future) :(
Examples of (semi-)Formal Criticism
Here I’ve collected an incomplete list of several critiques of EA and sorted them by my best guess of where they fall along several relevant criteria:
External Critiques
“Principles” Based Critiques
Philosophical critiques of effective altruism *I was disappointed to see that this article doesn’t actually summarize, synthesize, or propose philosophical critiques; rather, it responds to them in what seems to me like a rather uncharitable way :/ but still, the works that it references might be helpful
Against ‘Effective Altruism’
Shackling the Poor, or Effective Altruism: A Critique of the Philosophical Foundation of Effective Altruism *mostly focuses on Peter Singer and the Shallow Pond analogy
Effective Altruism and Extreme Poverty *In the end winds up supporting EA, but provides some moderate critiques throughout
Effective Altruism and the Altruistic Repugnant Conclusion
Effective Altruism: Pro and Contra
The Trouble With Algorithmic Ethics
“Political” Critiques
Effective altruism and the dark side of entrepreneurship
Better vaguely right than precisely wrong in effective altruism: the problem of marginalism
Doing good badly? Philosophical issues related to effective altruism
The Effective Altruist’s Political Problem
Should Solidarity Replace Charity?: Critiquing Effective Altruism and Considering Mutual Aid as an Alternative
Effective Altruism Promises to Do Good Better. These Women Say It Has a Toxic Culture Of Sexual Harassment and Abuse
The Other Half of Effective Altruism: Selective Asceticism *explicitly defends the core principles of EA, so purely a “Political” critique
How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI
The AI industry turns against its favorite philosophy
Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’
How Silicon Valley doomers are shaping Rishi Sunak’s AI plans
Fraud, Lies, Exploitation and Eugenic Fantasies
OpenAI’s crackup is another black eye for effective altruism
Sam Bankman-Fried and the Moral Emptiness of Effective Altruism
How effective altruists ignored risk
*Not a critique on its own, but representative of a common critique of EA as being ‘cult-like’: recently, when Googling “Effective Altruism” so I could get to the Forum, the top suggested autocompletes were “Effective Altruism Cult” and “Effective Altruism Secte” (interestingly, it gave me the French for some reason)
Mixed or other
The Good it Promises, the Harm it Does: Critical Essays on Effective Altruism
Beyond Moral Efficiency: Effective Altruism and Theorizing about Effectiveness
The Ethical, Political and Economic Challenges of Effective Altruism
Elite Universities Gave Us Effective Altruism, the Dumbest Idea of the Century
Why effective altruism is not effective
How effective altruism let Sam Bankman-Fried happen
Effective altruism’s most controversial idea (on Longtermism)
Why ‘longtermism’ isn’t ethically sound
Understanding “longtermism”: Why this suddenly influential philosophy is so toxic
Why Effective Altruism and “Longtermism” Are Toxic Ideologies
Effective Altruism: Philosophical Issues *not necessarily a critique in and of itself, though it does contain several articles that are more critical than others
Charity vs. Revolution: Effective Altruism and the Systemic Change Objection *focuses specifically on the systemic vs marginal change debate; I wasn’t sure whether to classify it as “Principles” or “Political”
Effective Altruism and its Critics *I don’t have free access to the full article, but from the abstract it looks not to levy any critiques itself, but rather to list some of the critiques the author found
Effective Altruism and Collective Obligations *I don’t have free access to the article :/
Internal Critiques
“Principles” Based Critiques
EA Forum tag: Criticism of effective altruism
“Political” Critiques
EA Forum tag: Criticism of effective altruism culture
EA Forum tag: Criticism of effective altruism causes
EA Forum tag: Criticism of work in effective altruism
EA Forum tag: Criticism of effective altruist organizations
Mixed or other
EA Forum tag: Criticism of longtermism and existential risk studies
EA Forum tag: Criticism and Red Teaming Contest
Winners of the EA Criticism and Red Teaming Contest
Some useful links in the comments as well:
from bruce: submissions worth highlighting that didn’t make the cut
from xuan: personal highlights from a non-consequentialist, left-leaning panelist
What I learned from the criticism contest
Concerns about Narrow Goalposts and Dismissing ‘Political’ Criticism
“As an academic, I think we should assess claims primarily on their epistemic merits, not their practical consequences.” from page 33 of your paper → from a purely academic philosophical perspective, I could understand this claim if the word ‘epistemic’ were replaced with a term like ‘ethical’, ‘logical’, or ‘philosophical’, as the basic tenets of EA are pretty defensible on paper. However, the word ‘epistemic’ relates to knowledge, and generally considers evidence alongside logic. To ignore ‘practical consequences’ would be to ignore a large body of evidence that may help to inform our perspective on EA’s merits. Of course, there are many confounding variables that obscure the relationship between the core philosophical tenets of EA and the ‘practical consequences’ of EA, which should lead us to think carefully before updating our perspective of EA’s merits based on any one given piece of real-world evidence. However, to deprioritize practical consequences entirely seems like it would lead us to miss out on some key considerations.
Let’s imagine that EA’s core ideas are applied in many different scenarios and that, separately, a randomized sample of mainstream ethical frameworks is applied in those same scenarios. If we started to observe, after a statistically robust number of trials, that the EA-applied scenarios led to worse outcomes on average than the other group, it would certainly lead me to question the epistemic merits of EA’s core claims. While this level of experimental rigor would be impractical, I believe a naturalistic observation comparing the successes and failings of EA vs equivalent non-EA frameworks would be a reasonable proxy for modestly bolstering or weakening (updating) my perception of the merits of EA’s core tenets.
Additionally, given the focus within Effective Altruism on applied ethics, which is highlighted in the name’s usage of the word “Effective”, it seems to me that one of the core claims is that it is important to examine practical consequences when evaluating how good or bad an idea is. To assess the merit of EA’s core ideas purely on non-‘political’ critique seems to run counter to those very core ideas. In fact, I would imagine that a good-faith interpretation of EA’s core principles would lead one to rigorously assess all kinds of critiques, philosophical as well as political, to constantly update our beliefs and actions.
Circling back to your paper, you continue this argument on pages 33 and 34.
Personally, I don’t find this argument particularly compelling, as 1) it lumps all political opponents of EA into one group, 2) it makes a very large claim with no supporting evidence, and 3) the hypothetical ‘political’ wrongness of the critics doesn’t affect the hypothetical ‘political’ wrongness of EA (this seems like a form of whataboutism[4]). Of course, I’m sure that you have many more perfectly legitimate arguments for why we shouldn’t place an undue amount of credence in political critiques, but that’s a debate I would like to see fleshed out more than the attention it has been afforded in this discussion thus far before I am convinced.
Side note, JerL’s comment on your Substack Post raises some points I find compelling :)
Concerns about How We Approach Engaging with Criticism of EA
I posit that people in the EA space should be more receptive to criticism from outside of EA, even if it is flawed by EA standards, for several reasons:
People in EA, even those who have trained in ‘good epistemics’, are still susceptible to any number of biases that could lead us to under-value external critique and over-value things that confirm our views
Engaging in good faith with diverse critiques of EA aligns with several of the core values of EA
The way people in the EA space behave in response to criticism can have an impact → responding to criticism with openness and empathy is likely to lead to better outcomes for EA
Regardless of how ‘correct’ or not EA’s principles are, the way that people in the EA orbit absorb, assess, and respond to criticism is important and can have real consequences. I have noticed a trend, both on the EA Forum as well as in discussions with people from EA-aligned organizations, at EAGs, and at other EA events, that the most popular responses to external criticism of EA tend to be highly dismissive and focus more on tearing down the arguments of the critic rather than making a good-faith effort to engage with the underlying sentiment and intention of the critic.
EA, as you have cited, places a very high value on self-critique and has invested in a significant number of diverse initiatives to promote such critique, such as the red-teaming contest. However, such criticism suffers from a huge blind spot, as people who are already associated with EA enough to participate in that type of critique are a severely biased sample.
It can often seem like critiques of EA from people outside the EA space are only taken seriously by EAs if those critiques mold themselves to meet the specific criteria, argumentative formulations, and style preferred by people within the EA space. If that is the case (it could just be my personal perception!), then we risk missing out on the diverse perspectives of the vast majority of people who are not inclined to communicate their perspectives in an ‘EA way’.
A portion of EA thought emphasizes the value of worldview diversification[5], in large part because there’s been a significant amount of research on the practical value-add of diversity (though the evidence is much more nuanced than is often portrayed in common discussion)[6]. Part of worldview diversification includes engaging with styles of argument that do not align with our own, as well as engaging with arguments coming from people with beliefs and backgrounds very different from our own. A very well-intentioned person who isn’t comfortable speaking in academic jargon or assembling logical arguments to a forensic standard may still have great points, and we would benefit from engaging with those points.
Beyond the potential epistemic benefits of engaging with external critique, the way in which we engage with critique has an impact in and of itself. If EAs’ most popular reactions to external criticism of EA are negative, dismissive, patronizing, or just generally don’t attempt to meet the critic where they are, then we may only serve to perpetuate negative impressions of EA and create a chilling effect on dissent within the EA space.
I’m not sure if pro-EA responses to critiques of EA get more upvotes, agrees, and karma than critical-of-EA responses on the Forum, but it seems plausible that might be the case. I’m also not present enough on X or any other social media platforms to see what the average response of EAs to criticism looks like; it could be very respectful and well received! But it isn’t hard to imagine that some responses by some EAs to criticism might be dismissive, come across as ‘elitist’, or be at least somewhat alienating to the non-EAs who see them. Regardless, such responses are bound to have at least a modest effect on the EA ‘brand’, and I would hope that we err on the side of engaging in good-faith, empathetic, personable responses when reasonable. (If the majority of EA responses to external criticism are already like that, great, let’s keep it up! If they aren’t, that’s unfortunate.)
To try to get some sense of how this dynamic plays out (at least on the EA Forum), I spent some time looking through the EA Forum for external and internal critiques of EA, and luckily @JWS shared this list collecting some criticism of EA criticism. As a little exercise, reading through the pieces JWS linked and the comments below them, a couple of things popped out to me:
There are drastically more entries under the topic tag “Criticism of Effective Altruism” that are written by, and for, EAs than there are entries that engage with external criticism of EA
Of the entries that do engage with external criticism of EA, several simply share the original critiques to open discussion and several counter-criticize the criticism, but I haven’t found any posts that agree with, or claim to have updated their thoughts based upon, external critiques → my assumption would be that people on the EA Forum are, on average, more motivated to refute external criticism than to engage or empathize with it.
There are quite a few critiques of EA that aren’t even mentioned anywhere on the EA Forum—I can’t be sure why this is, but it is plausible that it confirms the point above
It’s hard to find external critiques of EA on the forum...
One last note
I really appreciate you engaging on this so openly! Really respect your ideas and everything you bring to the table :)
Apologies if any of my counter-arguments misunderstood your original points or don’t seem fair; I’m sure I’m off base in a few places and am happy to update
Unfortunately I don’t have the time to make it through the full paper right now :( I’m sure you share a lot of very valuable arguments therein
In my limited understanding, the distinction between “Political” vs “Principle” critique is similar to the distinction between a “Consequentialist” vs “Deontological” approach whereby “Political” criticism refers to how things have actually played out in the real world and “Principle”-based criticism refers to how good the actual underlying ideas are
I’m much more familiar with internal criticism shared on the EA Forum, during EA events, etc.
https://en.wikipedia.org/wiki/Whataboutism
Example from Open Philanthropy: https://www.openphilanthropy.org/research/worldview-diversification/
A couple of relevant studies:
https://pubmed.ncbi.nlm.nih.gov/30765101/
https://journals.sagepub.com/doi/10.1177/0149206307308587