EA is bad for mental health (and I’m tired of pretending it’s not)

Summary

My perspective is that EA as a community, movement, and philosophy perpetuates ideas and environments that are harmful to mental health, putting EAs at disproportionate risk of poor mental health. I argue that EA is bad for mental health for systemic reasons[1].

In this post, I present:

  1. Risk factors for psychological harm that I believe to be predictable, neglected, and tractable to address on a shorter-term scale without drastic systemic change.

  2. Suggestions for how to mitigate those risk factors.

  3. What I see as fundamental incompatibilities between EA as an ideology and basic principles of mental health. (Problems that can’t be addressed while EA remains the same philosophy.)

Preamble

I attempt to describe risk factors as they are, and to propose solutions that are concrete and realistic. However, I am approaching this purely from a mental health background, without much consideration for impact or cost-effectiveness. I do this for two reasons:

  1. To emphasize what I believe is the nature of the problem (especially for readers who are not familiar with certain aspects of mental health); and

  2. To acknowledge the highly subjective nature of assessing the scale and impact of these problems and whether they warrant action. Many of these issues seem difficult or impossible to assess in scale/​impact, but I think that having messy and vibe-based conversations about these topics is a better starting point than nothing.

While I also highlight psychological harm that I believe to be facilitated by or even directly caused by EA, it’s worth remembering that not every instance of harm can be prevented, nor is every instance that can be prevented worth eliminating. I am not necessarily able to accurately distinguish these, and again, my starting point is to raise awareness of these issues and support my naive sentiment that we could meaningfully address them if we deemed it important and decided to take action.

Risk factors and suggestions for mitigation

Introductory EA content promotes the harmful idea of maximization

I believe that the EA Handbook, introductory reading groups, and certain canonical EA books can easily give newcomers the impression that EA endorses maximization as an ideal. I believe that maximization is objectively unsustainable and unhealthy for most people[2], and that the glorification of maximization also contributes to imposter syndrome in the community, based on the idea that non-maximizers are somehow morally lacking and “not good enough” as human beings.

It seems plausible to me that only a small fraction of EAs identify as maximizers. (Well, either that or most of us here really are imposters.) Why does EA as a community promote maximization as canon if only a small number of us actually resonate with it? I believe our current inclusion of it is not purely academic. Imposter syndrome, shame, not feeling good enough, unhealthy comparison to unrealistic standards, and people needing to speak up against maximization: these are common and recurring themes in the community. There must be some reason for it, and even if it turns out maximization is a minor offender, I think these symptoms of poor mental health are worth thinking about and potentially addressing.

I also think it is easy to underestimate the impact of careless messaging about maximization. For example, many people are introduced to EA at an impressionable age: during their early career, their university years, or even high school. I was in an introductory EA virtual reading group not that long ago, and students in my group felt the need to justify why they weren’t doing more and aiming for extraordinary impact, despite being exposed to completely new concepts while not yet having careers. Sure, the groups are designed to make us think critically, but I suspect that most graduates of these groups do not actually join the EA community[3], so they may be left with a mix of inspiration and inadequacy that there is no further opportunity to positively shape. Perhaps we could improve psychological safety from the outset, across the spectrum from people who only stumble across an EA article once to highly engaged EAs who may still feel inadequate about their contributions in the absence of effective messaging.

Suggestions: We could discuss maximization as a community and decide whether we really want to promote it in our messaging. If we decide otherwise, we could specifically normalize non-maximization by making a few high quality examples or recommendations with lifestyle balance and mental wellbeing in mind. The EA Handbook could be tweaked to reflect this. CEA could establish a recommendation for introductory online and in-person groups to promote or at least reference these concepts.

EA fosters conditions that trigger imposter syndrome

I suspect that maximization is just one of multiple contributors to the prevalence of imposter syndrome in the community. Imposter syndrome remains a more or less understudied and unsolved problem in psychology, but that doesn’t mean we can’t do anything to target it, even if experimentally. One simple idea is that we can talk very specifically about it, because imposter syndrome has at least one root in shame, and shame is dispelled by sharing about it. Heck, why can’t we make the most of it and run a study on imposter syndrome within EA? We might even discover some new insights about how to prevent or address it.

Suggestions: We could commission someone to make a high quality blog/​article about imposter syndrome and promote that on the EA Forum or in groups. We could also design an in-person workshop on imposter syndrome that local groups could aim to run themselves once a year. These workshops could also be run at EAG/​EAGx conferences. Someone could do an informal qualitative analysis of imposter syndrome among EAs as a project.

The EA Forum and other EA-adjacent online communities foster extreme perspectives

A lot of EA discourse happens asynchronously online, especially on the EA Forum, EA-adjacent forums (such as LessWrong), and the various platforms of “EA+ influencers” (e.g. Astral Codex Ten). A common phenomenon with online forums is that they tend to skew and amplify biases, normalize confrontational discussion norms, perpetuate misunderstandings, overrepresent controversies and extreme views, escalate interpersonal dramas, and encourage mob mentality[4]. It seems evident to me that the EA Forum exhibits all of these signs, and I’m not aware that anything meaningful has been done to target this natural phenomenon.

The important thing to note here is that in-person communication often directly avoids these problems, so some balance between synchronous and asynchronous interactions may help avoid the worst of them. For example, a newcomer to EA might pick up harmful notions of maximization from online content, then later shed them as a result of attending in-person EA events.

An extreme example of this is the online drama that unfolded regarding Nonlinear roughly a year ago. I think this is pretty much a textbook example of forums facilitating unnecessary drama. I think it’s reasonable to estimate that EAs witnessing and getting sucked into the online discussion and investigations collectively spent thousands of hours for virtually no gain while having no relevant connection to any of the parties or outcomes involved in the first place.

While it’s unhelpful to suggest that the primary people involved could have simply behaved differently, I think there are a couple of possible measures for reducing the amount of “collateral damage” in case of a similar event unfolding in the future.

Suggestions: For the sake of community norms, we could learn from past mistakes by acknowledging that the public-facing actions taken by certain individuals in the Nonlinear drama were probably unconditionally toxic and inappropriate. As in, all of their statements could have been 100% true, their intentions entirely good, and their chosen course of action would still have been objectively harmful to the community because of the natural consequences of that method of communication. We could promote awareness of this phenomenon in the first place, of the pitfalls of seeking trial/justice through the public court of opinion, and of recommendations for how to resolve seemingly impossible conflicts. The EA Forum could forbid posts that meet certain functional criteria of being a “hit piece”. The admin team could also follow a guideline for recognizing and deleting such posts, intervening after some initial delay once a drama spiral becomes evident.[5]

EA materials use violent language

From the perspective of Nonviolent Communication (NVC)[6], I believe that EA canon is often expressed using violent language, such as “shoulds”, absolutes, and logic that implicitly denies alternative moral perspectives. Although I’m not sure if this is deliberate, it also doesn’t seem coincidental. Violent communication can go hand in hand with unsafe debate, overconfidence, epistemic misrepresentation, shame, and polarizing effects on readers (including a tendency to increase cognitive bias). I suspect that it also contributes to the other risk factors already highlighted so far.

Suggestion: Continue encouraging one another to express epistemic uncertainties and truthiness, and acknowledge the unconditional validity of alternative perspectives (even irrational and “wrong” opinions) while avoiding “shoulds” and absolute claims about reality/​truth.

EA as a movement appeals to people trying to fill a psychological void

Social movements often resonate with individuals seeking a sense of purpose or to meet certain social needs such as identity or belongingness, to the extent that sometimes these needs play a more critical role than whether the beliefs of the group are consistent with the individual’s internal values and beliefs. EA is not exempt from this phenomenon, especially as EA targets and attracts young people. Young people are more impressionable and more likely to engage in movements in alignment with belongingness needs than older people[7].

I believe it is worth considering the psychological impact that EA may have on:

  1. People who hear about it casually as a one-off, e.g. stumbling across a random article

  2. People who commit to learning more about EA but do not end up joining the community

  3. People who become “moderately/​highly engaged EAs” and then leave the community within a few years

  4. People who resonate with ideas from EA and continue to do so without any first-hand interaction with the community e.g. working alongside colleagues who happen to be EAs and being curious about it without specific action or commitment

  5. Long-term moderately/​highly engaged EAs.

It seems to me that unhealthy messaging in EA can and does have long-term impacts on some people across most or all of these categories. But my point isn’t just about unhealthy messaging in general; it’s also about the selection effects that a social movement may have and the impacts of those effects. EA has some level of exclusivity in how it selects people[8], which is normal and fine, but the type of exclusivity that seems to be present and might be harmful is elitism. Belongingness is a natural “question” arising in any movement. Elitism modifies that question into some form of “Am I good enough? Will I be accepted despite my imperfections?”

An example is that whenever I talk to someone who wants to work in AI safety, we end up talking about their expected impact, how they feel about it, whether they think they have the raw talent to have a chance of doing impactful work, and how they feel about taking the spot for a specific opportunity, knowing that it might have been more morally correct to leave that spot to a hypothetical person who is smarter and more committed than them. While it is natural to wonder these things, it is also concerning that we have essentially normalized getting people to question whether they’re good enough. EA messaging and ideology directly contributes to this psychologically unhealthy comparison of our moral worth.[9] Many other examples of elitism in EA can be found here.

Another form of psychological harm occurs when someone joins EA for unsustainable reasons (e.g. subconsciously seeking to meet social needs while thinking that it’s due to genuine intellectual alignment), over-identifies with EA as a coping mechanism, and then eventually burns out and has a “tragic” disillusionment-style fallout with the community. A more detailed description of this idea can be found here: “My Model Of EA Burnout” (Logan Strohl). Although I’m saying this type of journey is harmful, not all harm is preventable. All movements “facilitate” this type of journey being possible. That said, exclusivity and elitism tend to enable greater harm in the disillusionment process. This harm can be present across any of the five categories of exposure to EA; for example, even someone who remains a dedicated EA for the rest of their life might still go through a burnout process, and during the rougher periods before resolution, they might express that disillusionment in ways that harm other people, not just themselves. You probably know multiple people in your life who went through such processes and had a “toxic phase”.

Suggestion: EA could reduce the harm that it facilitates by providing better tools and support to help people understand which parts of EA are beneficial and practical for them to integrate, and which parts are not, as well as reassuring people that it is okay to disagree with EA meta. If this ends up helping some people realize that EA is not for them, that is a positive and healthy outcome for multiple parties. On a more systemic level, the community could make a conscious decision about whether to make EA ideology less elitist and exclusive.

Some EA sub-communities are said to be extremely toxic

This is more of a placeholder to acknowledge that I’ve come across several people’s accounts of extremely concerning sub-communities. On a surface level, these descriptions seem reminiscent of in-group/​out-group dynamics, status games, and issues such as discrimination, favoritism, sexual harassment, coercion, power-seeking, etc. But I have too little exposure to be able to say anything more.

Suggestion: Hire professionals to evaluate cultural safety in sub-communities where there are a lot of complaints.

The ideological and cultish aspects of EA discourage openness, diversity, and critical thinking

Plenty before me have highlighted these two aspects. My previously mentioned suggestions could alleviate these downsides.

EA+ online communities normalize bad self-care

I’m concerned by my impression that many EAs and rationalists are insular when it comes to actually good mental health advice. My impression is based (perhaps unreliably) on seeing many posts[10] that seem to me like a pattern of rationalists trying to re-invent the wheel when it comes to mental/emotional wellbeing rather than referring to existing bodies of knowledge. Although these posts aren’t necessarily super popular (with mental health being a relatively less visible topic in general), my concern is that they could be leading people in an unhelpful direction. I feel that these posts often contain sensible-sounding-but-actually-harmful ideas, and there basically aren’t any competing posts with good ideas.

I’m not at all saying that these posts shouldn’t exist, or that we shouldn’t share our personal perspectives even when we’re at parts of our mental health journey where we can’t tell which ideas genuinely help and which don’t. What I am saying is that unsound ideas are naturally going to be the most interesting and visible ideas about mental health in the current online spaces, that we could try to counteract this if we wanted to, and that doing so could potentially improve the community’s currently poor level of basic knowledge about mental health.

Suggestions: Commission a few mental health practitioners to write a few articles for EAs. They could tailor it towards the community’s needs and challenges. I can also imagine a few podcast episodes that could succinctly demonstrate the way therapy might explore common blind spots held by EAs, providing a rapid update with less resistance than other methods.

The EA community lacks good mental health support

It’s pretty hard to articulate what I think is bad about the status quo and how bad I think it is, so I’m mostly going to approach this from the opposite direction and say what I think could make a positive difference to community mental health. We can proactively anticipate that certain parts of involvement in EA come with a higher risk of mental health challenges. Possible examples:

  • People who recently joined EA and need help processing emotional burdens that come with moral contemplation

  • EAs who are full-time job hunting or experiencing burnout

  • EAs having interpersonal conflicts with other EAs where the circumstances are complicated due to consequences potentially affecting the wider community

  • EAs working in AI safety (this is a bit more niche).

Overall, I don’t think we really have anything effective in place to meet these needs, not CEA[11], not friendly people who put on their profiles that you can contact them to talk about literally anything[12].

Wild idea: Fund 3-4 counsellors/​therapists/​psychologists available for subsidized or free short-term treatments for individuals in the community. Although this would be a significant cost, it also has the chance of uplifting the community in radical and unpredictable ways. By my rough estimate, this capacity is actually enough to provide accessible mental health support for the entire highly engaged EA community.[13] Ideally, this mix of practitioners would cover multiple intersectional perspectives such as neurodivergence, knowledge and non-knowledge of EA/​rationality (I would argue that it’s specifically beneficial to include therapists who are not EAs), LGBT+, etc. If this is too big a project, a smaller version would be to fund a single therapist and limit their support/​availability to one niche, e.g. burnout or newcomers to EA.[14]
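To make this rough estimate easier to sanity-check, here is a minimal back-of-envelope sketch in Python using the assumed figures spelled out in footnote 13 (salaries, session loads, wastage, and the 2020 estimate of 2,600 highly engaged EAs). Every number here is an assumption for illustration, not data.

```python
# Back-of-envelope check of the subsidized-therapy capacity estimate.
# All figures are rough assumptions taken from footnote 13, not data.

therapists = 4
salary_per_therapist = 100_000   # USD per year (assumed)
sessions_per_week = 20           # per therapist (assumed)
working_weeks = 46               # per year (assumed)
unbooked_fraction = 0.25         # assumed wastage from unfilled slots
sessions_per_treatment = 8       # assumed length of a short-term treatment
highly_engaged_eas = 2_600       # 2020 survey figure cited in the post

total_cost = therapists * salary_per_therapist                 # $400,000
max_sessions = therapists * sessions_per_week * working_weeks  # 3,680
delivered = max_sessions * (1 - unbooked_fraction)             # 2,760
people_treated = delivered / sessions_per_treatment            # ~345
coverage = people_treated / highly_engaged_eas                 # ~13%

print(f"Total cost: ${total_cost:,}")
print(f"Sessions delivered per year: {delivered:,.0f}")
print(f"Individuals treated per year: {people_treated:,.0f}")
print(f"Coverage of highly engaged EAs: {coverage:.0%}")
```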

Normal suggestion: Have a friendly community contacts list somewhere that’s up-to-date. One model for this could be a volunteer service, e.g. community contacts can volunteer certain hours of availability on their calendar. Just having a friendly chat could be surprisingly effective for addressing some of the risk factors that newcomers face. This could even be proactive outreach targeting recent graduates of the EA introductory reading groups. If community contacts are interested in providing more “intense” support without being a therapist, there are possible modalities for this such as some versions of peer support that can involve just a small amount of training.

EAs are intersectionally at greater risk of mental health challenges

The “average EA” is more likely to belong to multiple minority groups (e.g. ADHD, autism, LGBT+/GSM, giftedness), each of which faces an above-average bar to reach average well-being across multiple health scales. Many people believe that EAs include a higher proportion of neurodivergent people than the general population; this implies at least the commonly cited reference figure of 20% prevalence, and that reference figure itself is likely to be a moderate underestimate.

I have a few vague points about why this may be worth thinking about more:

  • If EAs are more miserable and unhealthy on average than normal people, this could be the case for undesirable reasons.

  • Even if there are no concerning reasons for this, not suffering from mental health issues can facilitate things like creative problem solving, making robust decisions, and having a scout mindset.

  • Minority groups are often systematically underrepresented in studies. We may be really keen to dive into science papers while unwittingly relying on studies that don’t reflect our demographics. This can matter a lot for things like mental health, nutrition, productivity, communication styles, and career advice.

Vague suggestion: The idea of “understanding yourself” seems hugely underrated to me and could be promoted alongside the already popular scout mindset concept in EA.

More will be said on this theme in the section focusing on undiagnosed neurodivergence.

EA probably massively incentivizes burnout

Here’s finally a concrete example of the type of blind spot EA seems prone to having around mental health. When EA asks us to focus on impacts that can be measured in numbers, and does not sufficiently mention failsafes for detecting when deciding based on numbers might be really short-sighted, some proportion of people end up making choices that seem logical but are actually predictably bad (to an educated advisor), and they suffer the consequences for months, years, even decades before realizing it and being able to change paths. Examples of this category:

  • 80,000 Hours recommends making logical decisions about career steps and choosing a career for impact. For a non-negligible proportion of people, this is literally the worst advice they could receive, because it seems sensible but simply does not work for some bodies/brains and can result in both misery AND low impact.[15]

  • Ranking career options based on a spreadsheet, with the implication that everyone can consider trading off some of their needs for the sake of impact. When a decision is framed this way, it relies on the comparison formula being accurate for your actual needs, and it’s surprisingly easy to misjudge the priority of your actual needs in favor of what seems to make sense.

  • “It makes sense for everyone to consider AI safety or veganism since orienting your choices towards these could potentially have greater impact than almost any other choices in your life.” Despite the compelling logic here, this kind of generalization and framing is somewhat problematic in terms of mental health and theory of change.

  • “Don’t do that even though you would love it, because you would have no impact.”

  • “Unless you’re really good at that career, you might not have much impact.”

  • Maximization, because it’s basically impossible for anyone to live up to that standard.

  • Too much emphasis on rationality /​ perfectionism in general.

  • Not having good boundaries around where EA fits into your life. I’ve seen examples of interviewees saying “I’ve never thought about that or had that concern” (an automatic healthy boundary) and, on the opposite side, “Yeah, that’s really troubling, but I try not to think about it / solved it through rationality”. I don’t think I’ve seen examples of conscious healthy boundaries being represented in EA.

I believe that EA’s hyperfocus on numbers and rationality tends to result in over-valuing positive short-term outcomes to the detriment of adequately evaluating long-term outcomes. All of the above examples involve ableism[16] and the theme of performing to a certain standard that is inherently unachievable or unhealthy for some people, down to their neurology. EA is somewhat rife with ableism. We’re intelligent, privileged, and care about living beings; so why shouldn’t we be able to do X, Y, Z? Unfortunately, internalized ableism has many negative effects, one of them being burnout.

Example: Even if AI timelines are very short (e.g. AGI within 5 years), would that make it worthwhile to have all our AI safety workers burn out on a scale of a few years?

Suggestion: Hire a specialist in burnout to evaluate EA culture, identify relevant risk factors, and come up with concrete strategies for reducing them.

Undiagnosed neurodivergence in EA

I believe undiagnosed neurodivergence to be a major risk factor to mental health in EA. There are too many angles to cover, so I’ll include just a few:

  • Ableism is a common lens through which people hold inaccurate beliefs about reality, particularly beliefs about their own body (internalized ableism), and sometimes also inaccurate beliefs projected onto other people’s bodies. Within EA, ableist advice often suggests that people do exactly the opposite of what their body needs from a neuropsychological self-care perspective, and this advice is often an unfortunate mix of sensible-sounding, popular, difficult to refute without nuanced guidance, and short-term rewarding.

  • A lot of advice in EA is somewhat good for neurotypicals and somewhat bad for neurodivergents, but I see very little awareness or acknowledgement of the latter.

  • Organizations and projects can benefit from both neurotypical and neurodivergent thinking, but disability accommodations may be required to support neurodivergent people to perform to their strengths in a sustainable manner. This applies to EA circles in a lot of ways, from making workplaces, projects, and resources more neurodivergent-friendly, to individuals empowering themselves through self-knowledge.

Awareness of neurodivergence worldwide is starting to gain traction, though our scientific understanding remains extremely limited. I’m hoping there will be a major revolution in terms of public acceptance of neurodiversity and related disability rights. I believe that EA, and especially the rationality community, could benefit from not being insular to this.

EAs and rationalists are at greater risk of unhealthy rationalizations, and EA+ material makes this worse

Humans are not rational beings, and there are limits to how much emphasis people can place on reason, logic, “accounting”, maximization, getting things right, and so on in their lives before it may interfere with wellbeing. This is a common theme in psychotherapy, but let me clarify my concern using an aggressive generalization. When rationality plays a strong part in someone’s life, rationality is functioning as one or more of these four things:

  1. As a hobby, because it’s fun or interesting

  2. As a tool that has practical value under certain circumstances

  3. As a habitual coping/​defense mechanism, likely linked to trauma (e.g. to avoid criticism, or provide a sense of control and certainty)

  4. As an arbitrary subject being used in relation to one of the above three categories (e.g. tool for status signaling, coping mechanism for identity formation).

The third category is the one I want to highlight as unhealthy. When rationality is wired into a person’s behavior as a defense mechanism, they are more likely to: engage in motivated reasoning and rationalization, suppress their emotions and neglect good self-care, have a soldier mindset, make poor long-term decisions held with high conviction, hold themselves to unreasonably high standards, promote ableist ideas, and burn themselves out.

While we can’t read people’s minds and tell for sure how much rationality is being used as a coping mechanism, there are certainly some signs, themes, and rhetorical patterns that are frequently associated with “coping-rationality” while rarely being associated with the other types.

Assuming that “coping-rationality” within EA is no more common than in the general population, it still seems likely that EA is amplifying more extreme versions of harmful ideas. This is because we engage deeply with these ideas, push their logical implications to extremes, and actively promote these conclusions within our community and beyond. My concern is that the EA community might be amplifying ideas influenced by mental health factors rather than clear, sound reasoning—and that we’re not only acting on these ideas ourselves but also spreading them more broadly.

Wild idea: Form a team of people who are good at generalist technical critique; they could act as a peer-review consultancy service within the EA ecosystem.

Ways in which EA fundamentally conflicts with mental health

Maximization is objectively bad, yet this concept persists in EA

Even though it seems plausible that there are very few EAs who identify as maximizers, maximization keeps appearing as a point of contention in EA discourse. Almost any form of maximization as a lifestyle is likely to be neutral at best and unhealthy at worst, with maximization of any rational endeavor skewing towards predictably unhealthy and harmful. Maximization is fundamentally incompatible with good mental health. You can’t “just have a little bit of maximization”; it’s all or nothing. One could try to do better by establishing boundaries around maximization, but that’s simply not maximization anymore. We also can’t just “rationally model our irrationality so that we can adopt more informed rational thinking while meeting our irrational needs”. I haven’t come across any modern framework of human wellbeing that suggests this can work, or that rationality sits on the same level as fundamental needs such as love, safety, purpose, etc.

So long as EA ideology glorifies maximization as an ideal, EA is bad for mental health. I would go further and say that failure to educate EAs about the intrinsically harmful nature of maximization is a failure in terms of mental health.

Side note: One might argue that it might be worth degrading the mental health of EAs in exchange for saving millions/​billions/​trillions of lives. I would argue that this trade-off is not realistically available, because hyper-rationality leads to worse decision-making. It’s a lose-lose situation. This post focuses on the mental health angle so I leave my explanation in the footnote.[17]

EA ideology fosters unsafe judgment and intolerance

My argument takes the following structure:

  1. EA values rationality over irrationality.

  2. In doing so, EA makes value judgments on irrational decisions and actions.

  3. Applying ethical frameworks that EAs commonly hold, irrationality is labelled as invalid, bad, and wrong due to its lower value.

  4. EA as a community normalizes making such ethical judgments, applied both internally towards oneself and towards other people and the world.

  5. Mental wellbeing frameworks generally hold that all perspectives are valid, including things that EA’s ethical frameworks assert as bad/​wrong/​invalid.

  6. Ethical claims are often made using the same language (bad, wrong, worse, false, etc) without acknowledging the underlying framework, which leads to misunderstandings.

  7. Therefore, ethical frameworks and mental wellbeing frameworks cannot coexist naturally without conflict, unless a higher framework is used to integrate these clashing perspectives in a healthy way[18].

  8. Mental wellbeing frameworks also tend to hold that overemphasizing rationality, certainty, correctness, and control is damaging to self-care and self-esteem.

  9. An environment that normalizes forming and assessing ethical judgments is psychologically harmful to both individuals and their audiences, due to both the impact of judgmental language as well as the psychological burdens associated with deep contemplation of certain topics.

  10. Some proportion of people do not already have a solid mental wellbeing framework, let alone a higher framework, when they first show interest in EA. After integration into EA, it seems plausible to me that such people may initially become even more disadvantaged than before, especially since EA content skews towards the ethical frameworks.

  11. EA ideology is harmful for mental wellbeing because it values and emphasizes the ethical side over the wellbeing side. Hypothetically, this can be addressed by acknowledging and promoting a suitable higher framework.

Even if my argument holds, it can be tricky to gauge the significance and impact of not having a suitable higher framework. I would probably summarize my main concerns as:

  • When we use ambiguous language (cf. points 6 and 7), some proportion of both EAs and people reading about EA are genuinely not aware of or able to easily switch between these two distinct frames, so we do amplify harmful misunderstandings at times.

  • In my opinion, EA is historically so heavily slanted away from wellbeing that taking the “extreme” action of adopting a higher framework carries zero risk of over-representing mental wellbeing. Conversely, if we don’t do something “extreme” like adopt a higher framework that strongly acknowledges both frameworks, then mental wellbeing will continue to be disincentivized, under-represented, and neglected by EAs.

  • I genuinely believe that the judgment aspect of EA canon has psychological implications on individuals, cultural safety, openness, decision making, and community health, but it may be too nuanced a topic to flesh out here.

I feel skeptical about the idea that EA as a movement can adopt a suitable higher framework, because it requires significant undoing of existing EA canon, a significant injection of outside expertise, and a significant drive coming from a community of members who were drawn to the ethics/​rationality side to begin with.

There is a simple solution to this on an individual scale: regard EA as a tool or hobby with major flaws and limitations, not as a complete ethical philosophy and way of being/​thinking with potentially unlimited applications[18]. I think it’s worth clarifying that individuals with healthy self-esteem and self-care are more likely to be doing this automatically, even without thinking about it. For the rest of us, ideas such as having a “morality budget” may be helpful as guidelines for thinking about healthy boundaries.

EA is incompatible with alternative values that can be healthy

Previous critiques of EA have made the point that EA is not truly and meaningfully open to all questions, and that consequently it is unable to act on some possibilities, even when there are good reasons to suspect that those possibilities may be better than EA’s current strategies.

For example:

  • Could we be undervaluing irrationality? Could instinctive decision-making lead to far better outcomes than rational thinking under certain contexts? (The obvious answer to this question is yes, but what are the chances of getting a grant for a project on this premise?)

  • Could we be overvaluing life and undervaluing death? Could we be incorrectly valuing productivity over non-productiveness? Happiness over suffering?

  • Could 80,000 Hours have it all wrong? We may know based on economic research that our gut instincts are sometimes drastically wrong, but that doesn’t mean that not following our gut instincts necessarily leads to better long-term outcomes. Isn’t it almost certainly the case that some people rely on their gut instincts too much and some people don’t rely on them enough? I can come up with realistic examples where 80,000 Hours does have it all wrong (relative to “best practices” according to psychologists).

EA fundamentally discriminates against certain perspectives, not only making certain solutions unavailable in practice, but also undermining scout mindset, cultural safety, and open collaboration. EA is only tolerant towards a small minority of viewpoints out of all the clusters of viewpoints that exist in the world, and many of the excluded viewpoints are “healthier” than accepted EA viewpoints.

EA devalues human life based on the arbitrary implications of capitalism and privilege

EA seems to imply that all human life is equally valuable, but for practical and ethical reasons, this means we should save people who can most easily/​cheaply be saved, as well as favor individuals who can do the most good. This is unjust from a human rights perspective, and this exact reasoning can be used to justify elitism, discrimination, genocide, and all kinds of other injustices.

Ethics is very much unsolved in that every ethical framework you can do math with has at least one really stark edge case that doesn’t seem acceptable, so my gripe isn’t that EA doesn’t have a magic solution, but that EA very much does apply ideas that propagate injustice, and it applies these ideas despite the fact that, in my opinion, there are tenable non-ethical perspectives that do not share the same problems.

I think EA fails the “veil of ignorance” test in its attitude towards non-EA altruists and even some subset of EAs. For example: suppose an EA suddenly inherits 10 million dollars and initially intends to donate 9 million dollars to AMF over a 20-year period while retaining a certain amount to seed a FIRE-based lifestyle. However, they suddenly fall critically ill and get diagnosed with an ultra-rare disease. Their maximum remaining lifespan is estimated to be 10 years, but only if they receive a rare experimental treatment that costs 1 million dollars per year to administer. They decide they want to try and enjoy another 9 years of their life, contributing only 1 million dollars to AMF.

If you tweak the numbers and rarities in this anecdote enough, you can make it resemble what it’s like to be an EA struggling with intersectional disprivilege, including neurodivergence, chronic illness, and other disabilities. “All else being equal”, our lives are devalued by the current version of EA that lacks a higher framework as mentioned earlier.

Closing remarks

In conclusion, I currently hold the following views:

  • EA has many significant blind spots when it comes to mental health. Even in the absence of quantifiable evidence for this, there are many reasons to suspect that EA may have these blind spots for systemic reasons. These blind spots may have causal relationships with some real and significant mental health factors affecting community health.

  • Some of EA’s potential blind spots towards mental health can be concretely addressed. Most of my suggestions involve getting outside professional perspectives on the community, because I believe that existing bodies of knowledge such as psychotherapy have been surprisingly under-represented in EA discussions, with only a few notable exceptions (such as Rethink Wellbeing’s programs).

  • The EA ecosystem may currently be acting as a breeding ground for harmful ideas and ignorance towards mental health, while being dangerously unaware about it.

  • EA is particularly susceptible to supporting ideas that externalize long-term sacrifices in the mental health of individuals in favor of short-term measurable outcomes, due to lack of knowledge about long-term risk factors to mental health.

  • EA’s emphasis on measurability and rationality makes the community susceptible to making objectively suboptimal decisions in a way that “rationality done better” cannot necessarily overcome.

  • EA ideology fundamentally clashes with good mental health and social equity, and this can only be resolved by either 1) diligently acknowledging its shortcomings or 2) making a drastic adjustment such as adopting a higher-level framework that integrates mental health concepts with ethical concepts.

  • I’m somewhat skeptical that EA will naturally drift towards healthier mental health perspectives over time in the absence of specific and significant actions.

  • In light of the above reasons, I believe that there are some contexts in which EA-aligned approaches to mental wellbeing as a cause area will have decidedly less impact than non-EA approaches.

  1. ^

    By systemic, I mean that EA as a community and movement has incentives which extend from EA’s core ideas, ultimately having a tendency to favor harmful mental health norms. This means that 1) EA is likely to be resistant to improving its mental health norms, and 2) if these harmful norms were magically removed, new or similar harmful norms would re-emerge over time.

  2. ^

    My stance is that this is “obviously correct” from any informed view of mental health, to the degree that casual skepticism about this is not worth considering. But I am happy to hear any informed views that present an opposing conclusion (not just skepticism), though I would be surprised to hear that any exists.

  3. ^

    It would be interesting to estimate the proportion of introductory program graduates that become EAs. This falls outside of CEA’s past focus on retention, which targeted EAs who rated themselves as already highly engaged.

  4. ^

    Here’s a helpful explanation and anecdote about problems with asynchronous communication.

  5. ^

    I know that I’m being a little bit vague here, mostly because I don’t want to introduce nuance within a complicated and controversial topic that may detract attention from more central ideas in this post.

  6. ^

    NVC is not very popular as a communication framework, but seems surprisingly overrepresented among EAs and rationalists. I couldn’t find a short article that explains how it relates to logical debate, so I picked a humorous TEDx Talk about it instead.

  7. ^
  8. ^

    Extremely low exclusivity might be something like “if you’ve ever had a positive thought about wanting to help another human being, then you’re an EA; you fall somewhere on the EA spectrum”. Extremely high exclusivity might be something like “to be a real EA you have to be a maximizer”.

  9. ^

    I believe it is naive to say that “all we did was apply some rational thinking and ask sensible questions based on a few assumptions, how can that be psychologically unsafe?” In my understanding, it is unsafe; there are frameworks in which similar themes can be explored safely, or at least with a healthier trade-off, and EA as an ideology thus far does not seem to value safer alternatives.

  10. ^
  11. ^

    I have a highly negative opinion about CEA’s role/​impact on online community health, but it seems unproductive to say more.

  12. ^

    Anecdotally, at a response rate of say 10% within a month timeframe, this doesn’t seem very accessible to me for someone in a time of specific need.

  13. ^

    Example: 4 counsellors/therapists, with an individual mean salary of US$100k, doing up to 20 sessions per week for 46 weeks each. This is a total cost of $400k for up to 3,680 sessions, but let’s account for 25% wastage in unbooked sessions, leaving 2,760 sessions. (There are also options for partial subsidization, e.g. 50%.) We can allocate availability for therapy using a similar scheme to the one universities with more than ten thousand students use when offering free counselling. The treatments offered are generally short-term interventions, e.g. 4-8 sessions targeted at a specific problem area. This is done at the discretion of the therapists, who can also take into account the overall availability of sessions. If sessions are under-utilized, they can see clients much longer term; if sessions are over-booked, clients who need longer treatments may be referred to external options after a certain number of sessions. If we crudely say that each individual receives an 8-session treatment once a year within this system, that means we can treat about 345 individuals. The last estimate of the number of EAs (in 2020) was 10,000 total, with 2,600 being “highly engaged”. If we assume that the subsidized service is promoted towards highly engaged EAs, and that most EAs won’t hear about or consider using it no matter how it’s promoted, 345 is still 13% of the highly engaged community. Although I don’t have any empirical data, I would tend to expect actual demand to be lower than 13%.

  14. ^

    I’m under the impression that there was a therapist funded purely to support AI safety researchers at some point, though arguably this does not necessarily impact the general EA community as is the point of my suggestion.

  15. ^

    I’ll raise a general point here about 80,000 Hours: their career guide is completely subjective, as in, there is no evidence for their career guide being effective and it’s just one guess among many possible clusters of valid guesses about good career advice. This is not a criticism, just a note that their guide could be extremely flawed while being a perfectly “sensible” guess based on reading the relevant literature.

  16. ^

    “Ableism is the discrimination of and social prejudice against people with disabilities based on the belief that typical abilities are superior. At its heart, ableism is rooted in the assumption that disabled people require ‘fixing’ and defines people by their disability. Like racism and sexism, ableism classifies entire groups of people as ‘less than,’ and includes harmful stereotypes, misconceptions, and generalizations of people with disabilities.”

    Ableism doesn’t have to be overt or come from bad intentions in order to be harmful. Simple generalizations about people’s abilities can be harmful or discriminatory. For example, a candidate at a job interview might appear anxious, timid, and lacking in confidence. Even if the job interview is for a customer service role, it is ableist to assume that their anxious manner during the interview means they would have a similar manner in their actual role. They could be anxious specifically during interviews, or have nearly been hit by a bus right before the interview, or otherwise be good at building an unexpected kind of rapport with customers.

  17. ^

    I believe that EA dangerously lacks skepticism about the limits of rationality, in a way that leads to wrong conclusions with high confidence. There are limits to how rational human beings can be. I believe these limits are measurable, and they’re much lower than EA/​rationalists would like to believe. For example, we encourage EAs to be aware of cognitive biases and rationalization (a defense mechanism where we deceive ourselves with flawed logic because we want something to be true), yet there is no clear evidence that training ourselves to be more rational actually works. There are also many real-world contexts with incomplete information where rational thinking is actually more likely to lead us to grossly incorrect conclusions.

    No one is immune to cognitive biases. We either accept that we are biased, try a healthy amount to reduce bias without expecting that we necessarily succeeded, or try really hard not to be biased and deceive ourselves into thinking we succeeded. I suspect that a fair number of EAs fall into the last camp, especially those who carry a sense of pride and identity based on their faith in rationality and science.

    As a more tangible example, I think there are many reasons to be skeptical that donating to AMF is really one of the best strategies for doing good. There are a ton of possible scenarios where we might look back on this and realize we were baited into overconfidence just because our current thinking has a certain “rational aesthetic”.

  18. ^

    It seems to me that any healthy model would place mental wellbeing at the absolute foundation, with ethics as an optional choice, though I could potentially see a case for other models being promoted for strategic reasons.

  19. ^

    Technically, neurodivergent burnout, which is typically much more severe and longer lasting than normal burnout.