“EA is very open to some kinds of critique and very not open to others” and “Why do critical EAs have to use pseudonyms?”
Preamble
This is an extract from a post called “Doing EA Better” (DEAB), which argued that EA’s new-found power and influence obligates us to solve our movement’s significant problems with respect to epistemics, rigour, expertise, governance, and power.
We are splitting DEAB up into a sequence to facilitate object-level discussion.
Each post will include the relevant parts of the list of suggested reforms. There isn’t a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.
Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.
EA is very open to some kinds of critique and very not open to others
Summary: EA is very open to shallow critiques, but not deep critiques. Shallow critiques are small technical adjustments written in ingroup language, whereas deep critiques hint at the need for significant change, criticise prominent figures or their ideas, and can suggest outgroup membership. This means EA is very good at optimising along a very narrow and not necessarily optimal path.
EA prides itself on its openness to criticism, and in many areas this is entirely justified. However, willingness to engage with critique varies widely depending on the type of critique being made, and powerful structures exist within the community that reduce the likelihood that people will speak up and be heard.
Within EA, criticism is acceptable, even encouraged, if it lies within particular boundaries, and when it is expressed in suitable terms. Here we distinguish informally between “shallow critiques” and “deep critiques”.[16]
Shallow critiques are often:
Technical adjustments to generally-accepted structures
“We should rate intervention X 12% higher than we currently do.”
Changes of emphasis or minor structural/methodological adjustments
Easily conceptualised as “optimising” updates rather than cognitively difficult qualitative switches
Written in EA-language and sprinkled liberally with EA buzzwords
Not critical of capitalism
Whereas deep critiques are often:
Suggestive that one or more of the fundamental ways we do things are wrong
i.e. are critical of EA orthodoxy
Thereby implying that people may have invested considerable amounts of time/effort/identity in something when they perhaps shouldn’t have[17]
Critical of prominent or powerful figures within EA
Written in a way suggestive of outgroup membership
And thus much more likely to be read as hostile and/or received with hostility
Political
Or more precisely: of a different politics to the broadly liberal[18]-technocratic approach popular in EA
EA is very open to shallow critiques, which is something we absolutely love about the movement. As a community, however, we remain remarkably resistant to deep critiques. The distinction is likely present in most epistemic communities, but EA appears to have a particularly large problem. Again, there will be exceptions, but the trend is clear.
The problem is illustrated well by the example of an entry to the recent Red-Teaming Contest: “The Effective Altruism movement is not above conflicts of interest”. It warned us of the political and ethical risks associated with taking money from cryptocurrency billionaires like Sam Bankman-Fried, and suggested that EA has a serious blind spot when it comes to (financial) conflicts of interest.[19]
The article (which did not win anything in the contest) was written under a pseudonym, as the author feared that making such a critique publicly would incur a risk of repercussions to their career. A related comment, which provided several well-evidenced reasons to be morally and pragmatically wary of Bankman-Fried, was downvoted heavily and eventually deleted by its author.
Elsewhere, critical EAs report[20] having to develop specific rhetorical strategies to be taken seriously. Making deep critiques or contradicting orthodox positions outright gets you labelled as a “non-value-aligned” individual with “poor epistemics”, so you need to pretend to be extremely deferential and/or stupid and ask questions in such a way that critiques are raised without actually being stated.[21]
At the very least, critics have learned to watch their tone at all costs, and provide a constant stream of unnecessary caveats and reassurances in order to not be labelled “emotional” or “overconfident”.
These are not good signs.
Why do critical EAs have to use pseudonyms?
Summary: Working in EA usually involves receiving money from a small number of densely connected funding bodies/individuals. Contextual evidence is strongly suggestive that raising deep critiques will drastically reduce one’s odds of being funded, so many important projects and criticisms are lost to the community.
There are several reasons people may not want to publicly make deep critiques, but the one that has been most impactful in our experience has been the role of funding.[22]
EA work generally relies on funding from EA sources: we need to pay the bills, and the kinds of work EA values are often very difficult to fund via non-EA sources. Open Philanthropy (and, previously, FTX) has had an almost hegemonic funding role in many areas of existential risk reduction, as well as in several other domains. This makes EA funding organisations and even individual grantmakers extremely powerful.
Prominent funders have said that they value moderation and pluralism, and thus people (like the writers of this post) should feel comfortable sharing their real views when they apply for funding, no matter how critical they are of orthodoxy.
This is admirable, and we are sure that they are being truthful about their beliefs. Regardless, it is difficult to trust that the promise will be kept when one, for instance:
Observes the types of projects (and people) that succeed (or fail) at acquiring funding
i.e. few, if any, deep critiques or otherwise heterodox/“heretical” works
Looks into the backgrounds of grantmakers and sees how they appear to have very similar backgrounds and opinions (i.e. they are highly orthodox)
Experiences the generally claustrophobic epistemic atmosphere of EA
Hears of people facing (soft) censorship from their superiors because they wrote deep critiques of the ideas of prominent EAs
Zoe Cremer and Luke Kemp lost “sleep, time, friends, collaborators, and mentors” as a result of writing Democratising Risk, a paper which was critical of some EA approaches to existential risk.[23] Multiple senior figures in the field attempted to prevent the paper from being published, largely out of fear that it would offend powerful funders. This saga caused significant conflict within CSER throughout much of 2021.
Sees the revolving door and close social connections between key donors and main scholars in the field
Witnesses grantmakers dismiss scientific work on the grounds that the people doing it are insufficiently value-aligned
If this is what is said in public (which we have witnessed multiple times), what is said in private?
Etc.
Thus, it is reasonable to conclude that if you want to get funding from an EA body, you must not only propose a good project, but also one that could not be interpreted as insufficiently “value-aligned”, however the grantmakers might define that term. If you have an idea for a project that seems very important, but could be read as a “deep critique”, it is rational for you to put it aside.
The risk to one’s career is especially important given the centralisation of funding bodies as well as the dense internal social network of EA’s upper echelons.[24]
Given this level of clustering, it is reasonable to believe that if you admit to holding heretical views on your funding application, word will spread, and thus you will quite possibly never be funded by any other funder in the EA space, never mind any other consequences (e.g. gatekeeping of EA events/spaces) you might face. For a sizeable portion of EAs, the community forms a very large segment of one’s career trajectory, social life, and identity; not things to be risked easily.[25] For most, the only robust strategy is to keep your mouth shut.[26]
Grantmakers: You are missing out on exciting, high potential impact projects due to these processes. When the stakes are as high as they are, verbal assurances are unfortunately insufficient. The problems are structural, so the solutions must be structural as well.
Suggested reforms
Below is a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.
Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.
Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!
Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.
Critique
General
EAs must be more willing to make deep critiques, both in private and in public
You are not alone, you are not crazy!
There is a much greater diversity of opinion in this community than you might think
Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!
EA must be open to deep critiques as well as shallow critiques
We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
Our willingness to consider a critique should be orthogonal to the seniority of its author(s) or of its subject(s)
When we reject critiques, we should present our reasons for doing so
EAs should read more deep critiques of EA, especially external ones
EA should cut down its overall level of tone/language policing
Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”
Civility must not be confused with EA ingroup signalling
Norms must be enforced consistently, applying to senior EAs just as much as newcomers
EAs should make a conscious effort to avoid (subconsciously/inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism
Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and thus we have nothing to worry about
“If we are so open to critique, shouldn’t we be open to this one?”
EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them
EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content
EAs, especially those in community-building roles, should send credible/costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community
EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity
EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations
EA institutions and community groups should run discussion groups and/or event programmes on how to do EA better
Institutions
Employees of EA organisations should not be pressured by their superiors to not publish critical work
Funding bodies should enthusiastically fund deep critiques and other heterodox/“heretical” work
EA institutions should commission or be willing to fund large numbers of zero-trust investigations by domain-experts, especially into the components of EA orthodoxy
EA should set up a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions [within 12 months]*
This body should be run by independent people and funded by its own donations, with a “floor” proportional to other EA funding decisions (e.g. at least one researcher/community manager/grant programme, plus admin fees of a proportionate amount)
If this foundation is established, EA institutions should cooperate with it
EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques
EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:
Every year invite all EAs to raise any worries they have about EA central organisations
These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees
Establish a voting mechanism, e.g. upvotes on the worries that seem most pressing
Red Teams
EA institutions should establish clear mechanisms for feeding the results of red-teaming into decision-making processes within 6 months
Red teams should be paid, composed of people with a variety of views, and former- or non-EAs should be actively recruited for red-teaming
Interesting critiques often come from dissidents/exiles who left EA in disappointment or were pushed out due to their heterodox/“heretical” views (yes, this category includes a couple of us)
The judging panels of criticism contests should include people with a wide variety of views, including heterodox/“heretical” views
EA should use criticism contests as one tool among many, particularly well-suited to eliciting highly specific shallow critiques
Epistemics
General
EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)
EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics
As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness
EAs should practise epistemic modesty
We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community
We should avoid assuming that EA/Rationalist ways of thinking are the only or best ways
We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/assumptions/characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)
We should stop assuming that we are smarter than everybody else
When EAs say “value-aligned”, we should be clear about what we mean
Aligned with what values in particular?
We should avoid conflating the possession of the general goal of “doing the most good” with subscription to the full package of orthodox views
EAs should consciously separate:
An individual’s suitability for a particular project, job, or role
Their expertise and skill in the relevant area(s)
The degree to which they are perceived to be “highly intelligent”
Their perceived level of value-alignment with EA orthodoxy
Their seniority within the EA community
Their personal wealth and/or power
EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views
The EA Forum should have its karma/commenting system reworked to remove structural forces towards groupthink within 3 months. Suggested specific reforms include, in gently descending order of credence (a toy illustration of the vote-weighting issue appears at the end of this subsection):
Each user should have equal voting weight
Separate agreement karma should be implemented for posts as well as comments
A “sort by controversial” option should be implemented
Low-karma comments should not be hidden
Low-karma comments should be occasionally shunted to the top
EA should embark on a large-scale exploration of “theories of change”: what are they, how do other communities conceptualise and use them, and what constitutes a “good” one? This could include:*
Debates
Lectures from domain-experts
Panel discussions
Series of forum posts
Hosting of experts by EA institutions
Competitions
EAG framed around these questions
Etc.
When EA organisations commission research on a given question, they should publicly pre-register their responses to a range of possible conclusions
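To illustrate the first karma reform above (equal voting weight), here is a toy Python sketch of how karma-weighted voting can let a handful of high-karma users outweigh a larger number of newer members. The karma tiers and weights are entirely hypothetical, chosen only to show the dynamic; the Forum’s actual vote-weight formula differs.

```python
# Toy comparison of vote-weighting schemes. The tiers and weights are
# hypothetical illustrations, not the EA Forum's real formula.

def karma_weighted_score(votes):
    """Each vote counts in proportion to the voter's (hypothetical) karma tier."""
    def weight(karma):
        if karma >= 10_000:
            return 8
        if karma >= 1_000:
            return 4
        return 1
    return sum(direction * weight(karma) for direction, karma in votes)

def equal_weight_score(votes):
    """One person, one vote, regardless of karma."""
    return sum(direction for direction, _ in votes)

# Five newer users (karma 50) downvote a heterodox comment;
# two high-karma users (karma 12,000) upvote it.
votes = [(-1, 50)] * 5 + [(+1, 12_000)] * 2

print(karma_weighted_score(votes))  # 11  -> the comment rises
print(equal_weight_score(votes))    # -3  -> the comment falls
```

Under karma weighting the comment rises; under equal weights it falls. That asymmetry, where the votes of established (and plausibly more orthodox) users dominate, is the structural force towards groupthink that the reform targets.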
Ways of Knowing
EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?
EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
History is full of people who thought they were very rational saying very silly and/or unpleasant things: let’s make sure that doesn’t include us
EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews
Diversity
EA institutions should select for diversity
With respect to:
Hiring (especially grantmakers and other positions of power)
Funding sources and recipients
Community outreach/recruitment
Along lines of:
Academic discipline
Educational & professional background
Personal background (class, race, nationality, gender, etc.)
Philosophical and political beliefs
Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse
Previous EA involvement should not be a necessary condition to apply for specific roles, and job postings should not assume that all applicants will identify with the label “EA”
EA institutions should hire more people who have had little to no involvement with the EA community, provided that they care about doing the most good
People with heterodox/“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view
Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made
EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time
There should be a survey on cognitive/intellectual diversity within EA
EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA
Openness
Most challenges, competitions, and calls for contributions (e.g. cause area exploration prizes) should be posted where people not directly involved within EA are likely to see them (e.g. Facebook groups of people interested in charities, academic mailing lists, etc.)
Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
Subject-matter experts from outside EA
Researchers, practitioners, and stakeholders from outside of our elite communities
For instance, we need a far greater input from people from Indigenous communities and the Global South
External speakers/academics who disagree with EA should be invited to give keynotes and talks, and to participate in debates with prominent EAs
EAs should make a conscious effort to seek out and listen to the views of non-EA thinkers
Not just to respond!
EAs should remember that EA covers one very small part of the huge body of human knowledge, and that the vast majority of interesting and useful insights about the world have come, and will continue to come, from outside of EA
Funding & Employment
Grantmaking
Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/“heretical” views
Funding frameworks should be reoriented towards using the “right tool for the right job”
Optimisation appears entirely appropriate in well-understood, predictable domains, e.g. public health interventions against epidemic diseases[80]
But robustness is far superior when addressing domains of deep uncertainty, areas of high complexity, low-probability high-impact events, long timescales, poorly-defined phenomena, and significant expert disagreement, e.g. existential risk
Optimising actions should be taken on the basis of high-quality evidence, e.g. meta-reviews or structured expert elicitations, rather than being used as the default or even the only mode of operation
Grantmaking organisations should commission independent external evaluations of the efficacy of their work (e.g. the success rates of grantmakers in forecasting the impact or success of projects) within 6 months, and release the results of any internal work they have done to this end
Within 5 years, EA funding decisions should be made collectively
First, set up experiments in a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms
For example, rotating panels or various forms of lottocracy
Subject-matter experts should always be consulted and their input weighted appropriately
Experiment in parallel with randomly selected samples of EAs evaluating the decisions of one existing funding committee
Existing decision-mechanisms are thus ‘passed through’ an accountability layer
All decision mechanisms should have a deliberation phase (arguments are collected and weighed publicly) and a voting phase (majority voting, quadratic voting, etc.; a toy sketch of quadratic voting appears at the end of this Grantmaking list)
Depending on the cause area and the type of choice, either fewer (experts + randomised sample of EAs) or more people (any EA or beyond) should take part in the funding decision
A certain proportion of EA funds should be allocated by lottery after a longlisting process to filter out the worst/bad-faith proposals*
The outcomes of this process should be evaluated in comparison to EA’s standard grantmaking methods as well as other alternatives
Grantmaking should require detailed and comprehensive conflict of interest reporting
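To make the quadratic-voting option mentioned above concrete, here is a minimal Python sketch under assumed parameters (a 100-credit budget per participant and illustrative proposal names): casting n votes for a proposal costs n² credits, so expressing a stronger preference becomes progressively more expensive.

```python
# Minimal sketch of a quadratic-voting phase for collective funding
# decisions. The budget size and proposal names are illustrative
# assumptions, not part of the original proposal.

BUDGET = 100  # voice credits per participant

def ballot_cost(ballot: dict[str, int]) -> int:
    # Casting n votes for a proposal costs n^2 credits.
    return sum(v * v for v in ballot.values())

def tally(ballots: list[dict[str, int]]) -> dict[str, int]:
    totals: dict[str, int] = {}
    for ballot in ballots:
        cost = ballot_cost(ballot)
        if cost > BUDGET:
            raise ValueError(f"ballot costs {cost} credits; budget is {BUDGET}")
        for proposal, votes in ballot.items():
            totals[proposal] = totals.get(proposal, 0) + votes
    return totals

ballots = [
    {"biosecurity": 9, "forum_revamp": 4},        # 81 + 16 = 97 credits
    {"deep_critique_fund": 7, "biosecurity": 5},  # 49 + 25 = 74 credits
]
print(tally(ballots))
# {'biosecurity': 14, 'forum_revamp': 4, 'deep_critique_fund': 7}
```

The quadratic cost is the design choice doing the work: participants can signal intensity of preference, but no single voter or bloc can cheaply dominate the tally.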
Employment
More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent “independent researchers”
EA funders should explore the possibility of funding more stable, safe, and permanent positions, such as professorships
Contact Us
If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me
The paradox of open-mindedness
We want to be open-minded, but not so open-minded that our brains fall out. So we should be open to high-quality critiques, but not waste our time on low-quality ones. My general worry with this post is that it doesn’t distinguish between the two. There seems to be a background assumption that EAs dismiss anti-capitalist or post-colonial critiques because we’re just closed-minded, rather than because those critiques are bad. I’m not so sure that you can just assume this!
Doing EA Lefter?
Another general worry I have about “Doing EA Better”, and perhaps especially this post, is the extent to which it seems to be implicitly pushing an agenda of “be more generically leftist, and less analytical”. If my impression here is mistaken, feel free to clarify this (and maybe add more political diversity to your list of recommended “deep critiques”—should we be as open to Hanania’s “anti-woke” stuff as to Crary et al?).
Insofar as the general message is, in effect, “think in ways that are less distinctive of EA”, whether this is good or bad advice will obviously depend on whether EA-style thinking is better or worse than the alternatives. Presumably most of us are here because we think it’s better. So that makes “be less distinctively EA” a hard sell, especially without firm evidence that the alternatives are better.
Some of this feels to me like, “Stop being you! Be this other person instead.” I don’t like this advice at all.
I wonder if it’s possible to separate out some of the more neutral advice/suggestions from the distracting “stop thinking in traditional analytic style” advice?
Yeah, I mean, your intuition isn’t wrong; one of the parts is literally “ways of knowing” and links to a mid-tier breadtuber. It’s this weird left-wing envy in EA that I don’t get. If we want to reduce conflict and infighting, I don’t understand why we would look towards the Left, of all places. This portion of the OP is the worst-written, but I feel like EAs upvote it and say they like it because it reads as epistemically virtuous to be open to it. Also, the authors are a bit dishonest, as the last time they received the partisan criticism they just pretended it was not left-leaning at all[1].
Honestly, I wanted to write a left-wing critique[2] but reading the ConcernedEAs stuff made me realise it’d just get lumped in and also bad faith actors just use criticism as a cudgel. I also don’t understand the deep criticism argument co-existing with the pseudonym argument because left-wing movements already exist and you can join them? You don’t need to stay in EA! You can also just split your time up?
https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1?commentId=mdkzyA7H82a5rvjrA#comments
I already wrote it and it’s about solidarity and the reactionary attitudes towards polyamory and circular firing squads in EA but I’m definitely not releasing it in the current climate. If anything I’m most tempted to write a post in the opposite direction against the Left at this point.
Well Rich, some of us here are leftists.
So of course, no, not all of us EAs dismiss these critiques, because a few us champion these critiques ourselves.
Ouch, this hurt. But I shall recover 😢. Suffice it to say, libertarian EAs shouldn’t assume these critiques are bad either.
I will point out that to-date, all the major EA scandals have been caused by libertarians (cryptocurrency, race science, sexual abuse in the polyamorous community), and I do think the more reckless libertarians in EA have done more to hurt this movement than anyone else.
If EA became more left-wing, in my leftist opinion, it would be more “EA”, if you get what I mean.
Hmm, this seems patently false to me.[1] Am I misunderstanding something? If not, I’d appreciate it if people didn’t assert false things on the forum.
SBF was a major Democratic donor with parents who are Democratic donors. I doubt he ever identified as libertarian. Among the biggest critiques of Bostrom’s academic views is that he seems too open to authoritarian surveillance (cf. the Vulnerable World Hypothesis), hardly a libertarian position. I don’t know which incidences of “sexual abuse in the polyamorous community” you’re referring to, but I suspect you’re wrong there too.
Hi Linch, sorry for the confusion. That comment was not specifically about certain people, and I never named SBF, Bostrom, etc.
I was more referring to the general communities of people who are interested in those respective areas as being libertarians. For example, there are many EAs working in cryptocurrency, and they tend to be libertarian. Many EAs have expressed interest in race-IQ differences on the forum, not just Bostrom. Cryptocurrency, race-IQ differences, and polyamory tend to be libertarian-dominated areas of fascination.
I do believe SBF donated large sums to Republicans. And Bostrom’s views seem to accord well with right-libertarians like Peter Thiel. I bring this up because Thiel has not been shy of using surveillance, having founded Palantir. Bostrom was also a member of Extropians with known libertarian links.
But I don’t really want to be speculating on these specific individuals’ political views; rather, I want to make the broader point that those areas of interest are associated with libertarians.
I think we’re maybe talking past each other. E.g. I would not classify Thiel’s political views as libertarian (I think he might have been at one point, but certainly not in the last 10+ years), and I’ll be surprised if the median American or libertarian would. Some specific points:
To be clear, the problem with SBF is that he stole billions of dollars. Theft is no less of a problem if it was in the traditional financial system.[1]
Notably, not to the Libertarian Party!
Seems pretty unfalsifiable to me. Also kinda irrelevant.
Seems like an unusual framing of “to-date, all the major EA scandals have been caused by libertarians.” Usually when I think (paraphrased) “X group of people caused Y”, I don’t think “X group of people have areas of interest in the vicinity of Y.”
If anything, non-consensual redistribution is much more of a leftist thing than that of any other modern political strand?
I see where you’re coming from, but I do see libertarianism as the thread that underpins all these scandals together.
Thiel has described himself as a conservative libertarian in the past, but yes, his politics are more conservative overall now. But I make the point that surveillance/authoritarianism is not incompatible with a libertarian view, and Bostrom was an Extropian.
SBF’s “problem” also includes his activities promoting cryptocurrency adoption, which, if embraced, could have caused widespread problems in the financial system. And I want to stress that the cryptocurrency scandals in EA are broader than just SBF (e.g. Ben Delo, Avraham Eisenberg); the problem is more systematic.
This is a strange and unhelpful-seeming comment. Obviously nothing I wrote should be read as denying that EAs are politically diverse (generic references to “EAs” should always be read as implicitly preceded by the word “many”).
I’d like to see more folks from across the political spectrum be happily involved in EA.
Things I don’t like so much:*
Gratuitous disrespect, e.g. through deliberately mis-naming your interlocutors.
The apparent assumption that anyone not a leftist must be a libertarian. (Is Joe Biden a libertarian too?)
Employing guilt-by-association tactics, and trying to pick a fight about which subgroups are collectively the worst.
The latter is the worst offense, IMO, and illustrates precisely the kind of tribal/politicized thinking that I strongly hope is never accepted in EA. I’d much prefer a “big tent” where folks with different views respectfully offer object-level arguments to try to persuade each other to change their minds, rather than this kind of rhetorical sniping. (Seriously, what good do you imagine the latter will achieve?)
Note that my complaint about “Doing EA Lefter” is not that I’ve anything against people trying to argue for views further left than mine—by all means, feel free! My concern was that their recommendations seemed to be presupposing leftism, and brutely commanding others to agree, rather than providing object-level arguments that might persuade the rest of us.
* = (I guess I also think it’s bad form to create a burner account for the sole purpose of writing a comment with those other bad features.)
Sorry Richard, I meant no disrespect. And I appreciate you acknowledging that there are leftist EAs.
Without wanting to do guilt-by-association, I simply wanted to express that there would have been a clear benefit to having a more left-wing EA, since leftists are more critical of cryptocurrencies etc. There were many EAs who did the right thing warning about cryptocurrency/SBF, but they were smaller in number, and overlooked by the community. So apologies, I went too far in maligning all libertarians/non-leftists.
Thanks, I appreciate the clarification. (I agree that a general advantage of having a more diverse/”big tent” coalition is that different ppl/perspectives may be more or less likely to pick up on different potential problems.)
Hello AnonEALeftist—thanks for sharing your thoughts, and I’m sorry if you felt like you had to post anonymously because of being leftist.
I think what Richard is perhaps getting at here[1] is not to say that all leftist critiques of EA are bad, but instead that EAs have come across them and have considered them lacking, and that this DEAB section is trying to get EA to consider these ideas while not actually arguing for them on the object level first. You may find this unfair, and I think the (alleged) ideological clash between EA and the Left has been danced around a bit by the community. I’m very much in favour of more constructive debate between the Left and EA though, and I hope you fellow lefty EAs can help contribute to that :)
I don’t think this is fully below-the-belt, but I think libertarian EAs would push back on the claim that libertarianism is necessarily related to, or causally responsible for, these harms.[2]
I definitely get what you mean, and I’d like to see the community explore it more in good faith. Are there any articles/resources that you think would be helpful for non-leftist EAs trying to explore this point of view? One thing I find fairly off-putting about some[3] leftist criticism is how relentlessly hostile it is. For example, I find it very difficult to see Crary’s criticism of EA as being in good faith, and I don’t think this is just because she’s not framing her arguments in EA language/terms: even when EA is critical of the Left, I don’t think we call Leftism “a straightforward case of moral corruption”.
Or at least, one interpretation
Not really wanting to dive fully into this—but it’s somewhat analogous to being against all of EA because of SBF
But not all!
Thank you JWS. Really appreciate your comments.
I have seen some EAs accuse certain critiques of bad faith where I found them the opposite, and have seen attacks on Leftists (e.g. the suggestion in the comment above that leftism would make EA less analytical). So I think a lot of this is due to differences in worldview/perspective.
But I certainly agree that there are some critiques of EA that are genuinely poorly done.
In terms of critiques I like:
Kemp makes great points about EAs being captured by wealthy interests
https://renewal.org.uk/effective-altruism-longtermism-and-democracy-an-interview-with-dr-luke-kemp/
McGoey makes good points about EA culture, e.g. EAs generally being ignorant of the role the IMF/WTO have played in exacerbating global poverty
But also, in terms of left-wing EA supporters, there are Garrison Lovely, Rutger Bregman, & Habiba of 80K.
I do agree that some EAs have labelled certain critiques as ‘bad faith’ or ‘bad epistemics’ without backing it up with clear reasoning; I just think there hasn’t been much vitriol at the level Crary reaches in her article, and I think that can be a barrier to good-faith dialogue on both sides.
The Kemp piece looks really good! I’ve bookmarked it and will make sure to read. I’m aware of Garrison and Habiba but will look into what Rutger has said. Thanks for sharing these people and their perspectives, I think these are exactly the kind of perspectives that EA should be listening to and engaging with.
The McGoey piece seems (at first glance) like it’s a bit in between the two. EAs having a blindspot about the policies of the IMF/WTO (especially in the postwar 20th century and the ascendance of the “Washington Consensus”)[1] and how they may have harmed the world’s poorest people seems like a very valid critique that EAs could explore for sure. But the article subheading calls EA “the Dumbest Idea of the Century”. Now, of course, EA critiques shouldn’t have to obey Marquess of Queensberry rules in order to be listened to by EAs. But I think it’s probably a psychological fact that if a group of critics keeps calling your ideas some combination of “moral corruption”, “the dumbest idea”, “excuses for the rich” and “white supremacist/fascist”[2], then you’ll probably just stop responding to their work.
If any EAs want to look into this, I’d recommend starting with Globalization and Its Discontents, by noted leftie firebrand *checks notes* Joseph Stiglitz, Nobel laureate in economics and former Chief Economist of the World Bank
Torres & Gebru especially deploy the rhetoric of the last 2
Glad to hear it.
I understand. I never take this stuff personally myself. I even think it’s more important to engage with criticism (provided you are headstrong enough for it at that time and place) if it’s especially disagreeable/hostile.
I haven’t read Crary but it’s on my list. The headline for McGoey’s piece is quite harsh, but there’s no really nice way to say some of these things (e.g. “excuses for the rich” isn’t that much nicer than what Kemp says about EA being captured by billionaire interests). These critics sincerely hold these positions, and whilst it’s hard for us to hear, it wouldn’t be right for them to water down their criticisms either.
And ultimately, doesn’t EA deserve harsh criticism, with the spate of scandals that have emerged and are still emerging? If it’s ultimately good for EA in the end, bring it on! More criticism is good.
Dear authors—could you please provide at least one concrete example of a high-quality “deep critique” of Effective Altruism which you think was given inadequate consideration?
I’m not the author, but there was a very prescient critique submitted to the EA criticism contest, that went underappreciated. https://medium.com/@sven_rone/the-effective-altruism-movement-is-not-above-conflicts-of-interest-25f7125220a5
UPDATE: actually, I realised the post did specifically mention this critique as an example.
Thanks! Now that SBF has been disavowed do you think EA still has a big problem with under-emphasising conflicts of interest?
I still think the best critiques benefit from being extremely concrete, and that article could have had more impact if it spent less time on the high-level concept of “conflicts of interest” and more time explicitly saying “crypto is bad and it’s a problem that so many in the community don’t see this”
I felt the article was pretty concrete in saying exactly that: “crypto is bad …”. It didn’t strike me as high-level/abstract at all.
Just a note, but if you’re trying to facilitate object-level discussion it might be better not to drop this right as Ozzie dropped a very similar post? https://forum.effectivealtruism.org/posts/hAHNtAYLidmSJK7bs/who-is-uncomfortable-critiquing-who-around-ea
I appreciate the thought, but personally really don’t see this as a mistake on ConcernedEAs.
I actually pushed that post back a few days so that it wouldn’t conflict with Owen’s, trying to catch some narrow window when there aren’t any new scandals. (I’m trying not to overlap closely with scandals, mainly so it doesn’t seem like I’m directly addressing any scandal, and to not seem disrespectful.)
I think if we all tried timing posts to be after both scandals and related posts, then we’d develop a long backlog of posts that would be annoying to manage.
I’m happy promoting norms where it’s totally fine to post things sooner than later.
It’s a bit frustrating that the Community frontpage section just shows 3 items, but that’s not any of our faults. (And I get why the EA Forum team did this)
Hah, fair point. I guess I just am hoping to drive more discussion about this type of thing on the forum and it is definitely frustrating to see how broken up conversations are.
I am also pushing to promote things quicker and not get delayed in drafting forever. I did that for a while and basically never posted anything—I wish more people would be willing to post things on the community side that aren’t extremely high quality and polished.
Can you suggest deep critiques of EA from an ideologically moderate or conservative standpoint?
I think Tyler Cowen’s critique might be the best in this space imo.
Thanks—I should have been clearer that I was asking the ConcernedEAs specifically. I’m trying to test the allegation that they are interested in only leftward critiques by asking for a different recommendation.
(trying to focus my comments on particular thing here, instead of a long comment[1] trying to cover everything. Also, I again want to thank the authors for breaking down the original post and continuing the object-level discussion)
EA and Leftist Philosophy: Détente or Decimation?
As noted by others, while this particular section of DEAB suggests that EA would be institutionally and epistemically improved by accepting more ‘deep critiques’, it does not suggest EA accept these critiques from any direction. I’d be very surprised if the authors thought that EA would be improved by seriously considering and updating on the thoughts of our right-wing critics.[2]
In this post in particular, the relevant claims are:
EA is more likely to listen to a critique that is not critical of capitalism
The main approach to politics in EA is ‘liberal-technocratic’
It suggests “The Good it Promises, the Harm it Does” as an example of a ‘deep critique’ that EA should engage with. Most, if not all, perspectives in said book criticise EA from an anti-capitalist perspective.
EA should “seriously engage with critiques of capitalist modernity”
EA funders should “enthusiastically fund deep critiques” (which I would assume take the form of the anticapitalist approaches above)
So where I agree with this line of thinking is that I think there is a strong ideological divide between modern leftism and current EA (because I think EA is an ideology in some respects, not just a question, and that’s ok). I think intellectual and public leftism is probably the largest source of criticism at the moment, and likely to be so in the future, and it is definitely worth EA investigating why that is, finding the key cruxes of disagreement, and making it clear where we think we can learn and where we reject leftist arguments, and the key reasons why.
However, one of the reasons why I have ended up as an EA is that I find modern leftist epistemology and theodicy to be lacking. In disputes between, say, Hickel and Roser or Hickel and Smith, I’m not on the former’s side; I think his arguments (as far as I understand them) are lacking. The same goes for anticapitalist critics who do post on the Forum.[3] I think this is because their arguments are bad, not because I’m pattern-matching or hiding behind ‘bad epistemics’.
Take Crary’s “Against Effective Altruism”, an example I think ConcernedEAs would agree is a ‘deep critique’, and one which describes EA as “a straightforward case of moral corruption”.
First, in terms of the institutional reforms suggested, I can’t think of any movement that would give ‘enthusiastic funding’ to critics who call the movement morally corrupt. Second, I don’t think Crary really argues for rejecting EA in the piece. She frames the critiques as ‘institutional’, ‘philosophical’, and ‘composite’, but doesn’t really argue for them that much. Plenty of other authors are mentioned and referenced, but the article seems to me to assume that the anticapitalist critiques are correct and to proceed from there. Finally, I don’t think there’s much middle ground to be had between the worldviews of Crary and, say, MacAskill or Singer. Indeed, she ends the article by saying that for EA to accept the critiques she believes in, EA would cease to exist. And here I do agree. What EA should do, in my opinion, is explain clearly and convincingly why Crary is completely wrong.
In conclusion, I do agree that there is a lot of value in exploring the leftist critiques of EA, and I think there has been good EA work to reach out to leftist critics.[4] But I think the ConcernedEAs authors who are strongly sympathetic to these leftist critiques have the responsibility to spark the object-level debates rather than suggesting they be adopted for meta-level concerns, and those in EA who disagree with them should provide good reasons for doing so, and not hide behind accusations of ‘bad epistemics’ or ‘wokeness run amok’.
Edit: It still became a long comment 😭 I’m trying my best ok!
I especially have in mind Richard Hanania’s recent critique, which I thoroughly disagreed with
I actually think that last post is really well-written, even if I do disagree with it
See this podcast from Garrison Lovely, and also this one by Rabbithole
I consider my recent critical post to be a deep criticism (it criticises an aspect of EA culture that is emotional/intimate, and it could impact the relative statuses of people within EA, including prominent figures) and I wrote it under a pseudonym, so I’ll add my personal perspective. I don’t think this post captures why I wrote my post under a pseudonym, and I don’t think decentralising EA would have caused me to post it under my real name.
I’m also not sure exactly what message we should take from many people using pseudonyms. Here are some possibilities:
People are rightly worried about retaliation
People are wrongly worried about retaliation
The community is unusually open to criticism and will seriously consider criticisms made even under pseudonym, so people feel more comfortable using pseudonyms for various other reasons including personal ones
Other possibilities I haven’t thought of
There’s probably a combination, but I don’t know how we could determine how much of each there is. On a positive note, I think most other communities would simply disregard deep criticisms made under a pseudonym or would try to dox the authors of such pieces, which is not something I worry about here.
This doesn’t really seem like a criticism of EA to me, more of a community health suggestion. I think when ConcernedEAs are saying we’re not receptive to deep critiques, they’re claiming we’re willing to listen to people suggesting we spend 1% less on bednets and 1% more on vitamin supplements, but not people suggesting we should switch from working on AI alignment to ending wealth inequality.
Much of this might be true, and I agree with most of the reforms; I hope many EA groups and orgs might be shifting slowly in those directions even while not necessarily announcing it. I especially like the stuff around epistemic modesty, understanding what we really mean by value alignment, and some of the karma reworking. Obviously the voting weight system has pros and cons in both directions, but I still do feel uncomfortable about a space where one person’s opinion is worth more than another’s.
I would be interested, though, to hear if you know of another community which is more open to criticism, either shallow or deep, than EA. I don’t have a wide spectrum of communities to draw on, but I can tell you that in the NGO world and in specific fields of academia, my experience has been that people get far pricklier far faster than in the one EA group I was part of and on this forum. It is difficult for any group to truly gaze into its own abyss and acknowledge flaws and weaknesses. I still feel that even though we might struggle with deep self-criticism, perhaps we are still better at it than others.
I don’t want to write this myself, but I wonder if it might be helpful for this post to lay out why being open to a broader set of critiques is important. I agree with this post, for the record.