Personal Perspective: Is EA particularly promising?
Summary: While making some important personal decisions, I wanted to examine my understanding of the community's promise and found it useful to turn that examination into an okay-ish post. My tentative conclusion on the promise of EA: 6/10 (where 10 is an incredibly flourishing state).
Intro
I'm currently in the phase of making some important personal and professional decisions related to the EA mission, so I wanted to examine my current understanding of its promise. In the process, however, I noticed that my thinking was too sloppy, and it became clear that approaching the question with the intention of posting it was very helpful for me.
I'd be deeply grateful if you could strengthen my understanding, e.g., by pointing me to important discussions, offering a different framework for thinking about value, or pointing out suboptimal assumptions.
Also, while I think there’s an important distinction to be drawn between the EA mission and the EA community, I’m mainly lumping the two together in this post.
Tentative conclusion:
Intuitive rating: 6/10 (where 10 is an incredibly flourishing state). EA has made outstanding progress in terms of understanding value, influencing the world according to those values, and being antifragile. However, when I ground this in the bigger scheme of things (e.g., comparing EA's committed capital to Elon Musk's net worth) and consider existing problems (e.g., lack of individual flourishing and the prevalence of mental health issues), my previous unconditional enthusiasm decreases. I'm in the process of thinking about the implications, but this post has already evolved more than I expected, so coming up with constructive solutions is beyond present-me.
Three (abstract) conditions for bringing about long-term value
What has to hold true for something like a community to be promising? There are probably excellent models out there, but for pragmatic reasons I generated my own simple framework.
Understanding true value
Understanding what's actually valuable (and, preferably, what's less valuable) is needed. The vector must point in the right direction.
Influencing / creating according to values
Understanding isn't enough—a capacity for influencing and creating according to those values is also needed. The vector must have a non-trivial length.
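To make these first two conditions slightly more concrete, here's a toy formalization of the vector metaphor (my own sketch, nothing standard): let $\hat{u}$ be a unit vector pointing at what's truly valuable and $v$ the community's influence vector. Then, roughly,

$$\text{long-term value} \approx \langle v, \hat{u} \rangle = \lVert v \rVert \cos\theta,$$

where $\theta$ is the angle between the community's direction and true value. Understanding true value pushes $\cos\theta$ toward 1; capacity to influence increases $\lVert v \rVert$. Neither factor compensates for the other: a long vector pointing the wrong way is as unpromising as a well-aimed vector of trivial length.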
Being antifragile (improving in the face of mistakes + being sustainable)
It's highly unlikely that any process will work well from its inception. Therefore, being antifragile in the face of uncertainty and mistakes is crucial. My vector model is too simplistic and breaks down here, but I think this is an important notion. As Nassim Taleb describes the quality: "Wind extinguishes a candle but energizes fire."
Assessing my impressions of the community
Understanding true value (excellent contributions to the world or qualities of the community)
Intuitive rating: 8/10, based on the points below.
Normative
Moral uncertainty. Moral parliament.
Impartial altruism/impartial welfarism.
Expanding circle of compassion / anti-tribalism/ anti-speciesism.
Long-termism.
In-between normative and epistemic
Global priorities research.
Existential risks.
Suffering risks.
Transcending important unfortunate tendencies of the mind
Scope insensitivity.
Confirmation bias.
Tribalism.
Crucial considerations.
Epistemic
A massive window of discourse.
Cruxes and scout mindset.
Probabilities.
Predictions and betting.
Finding mistakes.
Lacks (or growth areas)
Understanding human flourishing / psychological well-being
It seems to me that most people in the community aren't flourishing in the sense of having a strong sense of agency, hope, and purpose while pursuing what they believe is the most important thing they could do with their lives. Holden Karnofsky appears to be a rare example of someone who is. For more, see the counter-considerations below.
More diverse perspectives from other fields and paradigms (e.g., the social sciences and humanities) as opposed to CS, economics, philosophy, and math. Probably also less elitist / high-achieving folks.
The language, models, and thinking styles prevalent in EA are heavily skewed towards a narrow set of academic disciplines. This probably affects what crucial considerations are being considered and how they’re being considered.
Influencing / creating according to values
Intuitive rating: 6/10. Overall, this is highly promising given how counterintuitive and, at least at first, inconvenient some of the ideas can be.
Organizations and institutions
10-50 decently successful organizations (compared to a global reference frame) founded in the spirit of EA.
Integrated into historic and influential institutions (FHI, GPI).
Some influence on key governance bodies
CEA (Toby Ord) advises the UK Government and the UN.
A UN report acknowledging long-termism and existential risks and proposing specific related initiatives.
People involvement
222 local groups around the world.
For reference, the Global Shapers community claims to have 457 local groups.
Based on my experience with local groups outside of EA versus my understanding of the ones inside EA, EA local groups seem particularly well-run, and some are even professional organizations.
Conferences with 800 attendees.
Capital committed / influenced
Money moved or committed to EA is estimated at ~$46Bn by Ben Todd (source).
This is around 1/5 of Elon Musk's wealth, or 0.05% of the world's GDP.
This is certainly good but not mind-blowing when compared to those reference points.
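As a quick sanity check on those ratios (assuming, as rough figures from around the time of writing, a net worth of ~$230Bn for Musk and a world GDP of ~$90Tn; both fluctuate):

$$\frac{\$46\text{Bn}}{\$230\text{Bn}} \approx \frac{1}{5}, \qquad \frac{\$46\text{Bn}}{\$90{,}000\text{Bn}} \approx 0.05\%.$$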
Changes in attitude, culture, and psychology
I didn't have time to go into this, but it seems like an important aspect of influence.
Public influence, recognition, and role-models
A bit of attention from some of the most influential figures of our time, e.g., Bill Gates tweeting about Will MacAskill and Elon Musk blurbing Superintelligence.
Will MacAskill is doing exceptionally well (a TED talk, appearances on some of the most popular podcasts).
MacAskill getting on Forbes 30 under 30.
There are 600 people on the list per year, which makes this less substantial.
5-10 important books written (e.g., Superintelligence made the NYT bestseller list).
Talent attraction
E.g., Sam Bankman-Fried.
Being antifragile
Intuitive rating: 8/10.
Distribution
There isn't a single founder or small handful of people who are obviously indispensable, or who would drastically hurt EA by doing something scandalous or (it makes me upset to say) by passing away. However, it isn't a grassroots movement either.
Improvement process
Truth-seeking principles are a key part of the community's identity.
E.g., several organizations have an “our mistakes” page.
There's a data-driven approach to understanding the community and coming up with solutions, e.g., CEA's approach or Charity Entrepreneurship's approach from last year.
Longevity (time—not health)
10-50 moderately successful organizations founded in the spirit of EA.
50% of smaller businesses don't last for more than 10 years; 12% stick around for 26 years or longer (source).
Integrated into historic and influential institutions (FHI, GPI).
I couldn’t find a base rate for the longevity of departments at prestigious universities so I’m unsure how strongly I want to update here.
Counter-considerations against the community (aka growth areas)
Lack of individual flourishing (e.g., agency, hope, and purpose) + a high prevalence of mental health issues
I see a strong lack of agency, hope, and purpose in the eyes of the people I meet in EA. Encountering EA could be among the most empowering and purpose-providing things that people come across in their lives (at least as a by-product), but that's really not the sense I have.
I don't mean to say that EA should become naive and unconditionally enthusiastic. But even attempting to prevent X-risks or reduce the horrendous suffering of animals can be compatible with a sense of agency and purpose. Once again, Karnofsky might be a role model in this respect.
Relatedly, I find it highly worrying that there's a high prevalence of mental health issues. 71% of the 303 participants in the EA mental health survey have been diagnosed with a mental health issue or intuit that they have one. Additionally, the motivation and self-care workshops at conferences seem crowded. While the mental health survey found an insignificant negative correlation between involvement in EA and mental health problems, I assign a moderate probability (30%-70%) to there being a positive correlation, at least for some significant fraction of the community. Even if there isn't a meaningful positive relationship, this still seems like a substantial growth area, as it represents a potential major impediment to the community's mission and potential.
For reference, I think the optimize.me community is doing a better job at this; i.e., encountering that community seems to be among the better things that happen to people in terms of their individual flourishing. In a sense, this is to be expected given that it's closer to that community's purpose (whereas the personal flourishing of members isn't within the scope of the EA community), but I still think it's underappreciated by EA, both for intrinsic flourishing reasons and for instrumental reasons.
Given that I’m a coach, I might be putting this on a pedestal. On the other hand, I think that this also makes me more able to give this proper appreciation.
Homogenous demographics
Predominantly white and male. As a white male, I'm almost certainly underappreciating the scope and consequences of this problem. However, as far as I can tell, this is getting attention: CEA appears to prioritize it, and there seems to be some progress from 2019 to 2020, which is great (source).
It’s kind of a youth movement.
Youth movements tend to be overly naive and prone to biases.
It's elitist, which can lead to
Condemnation and resistance from the broader public?
Biases.
Appendix (example of an update)
I started out with the impression that EA strongly lacks vision compared to my impression of its best version. I tried to find counter-evidence, and I think I have to abandon and refine that view, because The Precipice is clearly amazingly visionary and CEA is at least moderately visionary (see below).
CEA (Source)
“CEA’s overall aim is to do the most we can to solve pressing global problems — like global poverty, factory farming, and existential risk — and prepare to face the challenges of tomorrow.”
“If we succeed, the next generation of leaders, thinkers, and philanthropists will be focused on addressing these problems as effectively as possible.”
The Precipice by Toby Ord
Existential Security
“It is about the canvas on which we shall work: the lengths of time open to humanity, the scale of our cosmos, and the quality of life we might eventually attain. It is about the shape of the land we are striving to reach, where the greater part of human history will be written.”
“It requires only that if we could survive long enough, and strive hard enough, then we could eventually travel to a nearby star and establish enough of a foothold to create a new flourishing society from which we could venture further.”
Comments

This is a great post, and I think this type of thinking is useful for someone who's specifically debating between working at / founding a small EA organization (one that doesn't have high status outside EA) vs. a non-EA organization (or, say, Open Phil) early in their career. Ultimately I don't think it's that relevant (though still valuable for other reasons) when making career decisions outside this scope, because I don't think conflating the EA mission and community is valid. The EA mission is just to do the most good possible; whether or not the community that has sprung up around this mission is a useful vehicle for you as an individual to do the most good you can is a different and difficult question.

If you believe that EA as a movement will grow significantly in wealth and ability to affect the world, you could rationally choose to align yourself with EA groups and organizations for career capital / status reasons (not considering first-order impact). However, it seems like the EA community greatly values externally successful people, for instance when hiring; there's very little insider bias, or at least it's easy to overcome. When considering next steps, I think the mindset of "which option maximizes my lifetime impact" is more correct and useful, though harder to answer individually, than an indirect question like "which option is more aligned with the current EA community" or "which option is ranked higher by 80,000 Hours" in almost all cases.

I'm sorry if I misunderstood your post; I'm trying to sort out my own thoughts as well. Again, conflating the community and mission is still a useful approximation if you're considering working for one of the smaller EA organizations, or in a "smaller" role.
Excellent comment. I'm mainly considering the first set of options you're pointing to, which means the mission and community are pretty closely connected. I'm curious: where did you get the "lifetime impact" mindset from? It seemed original to a small group of people, so I'm happy to hear it's used more widely. With that said, very early on I think it's more useful to think in terms of experiments, heuristics, and (maybe) a decade hence, because early on most people have a lot of experience to gather about themselves and the world (although this can still be done within the larger frame of lifetime impact). But I'm starting to move past the early-career stage and have more data and conviction about personal fit, so I can make stronger decisions. I can also recommend the podcast with Holden Karnofsky and the book Range by David Epstein.
That's a good point; at my level, thinking about the details of lifetime impact between two good paths might be almost completely intractable. I don't remember where I first saw that specific idea; it seems like a pretty natural endpoint of the whole EA mindset. And I'll check out that book; it's been recommended to me before.
The EA Mental Health Survey may have involved heavy self-selection for mental health issues, so I would be careful about giving it much weight as representative of the community.
I agree that it seems likely that there were selection effects. However, I'd be surprised if the true proportion were less than 40% (the survey found 71%). I base this on Lynette Bye's estimate in this footnote and on my own anecdotal impressions from FB, conferences, the public discussions around taking antidepressants, and friend groups. Given the ramifications this might have in terms of immense loss of instrumental value and needless human suffering, I still think it's a massive growth area for EA if the community wants to approximate its flourishing state.
To mention but a few of the potential instrumental effects: organizations not doing what they're capable of (e.g., due to a lack of hope and excitement around the vision, and people not being able to work purposefully and productively) and individuals not doing what they're capable of (e.g., because they're feeling overwhelmed and depressed about the day ahead instead of being excited and on a long-term mission). This might also have substantial negative signaling effects for people new to the community, as I think EAs might rank particularly high on a certain subset of unfortunate dynamics (e.g., guilt) while probably being similar to the general population with respect to some conditions (e.g., depression). But I feel this would benefit from being a full post.
Thanks for sharing optimize.me! It's really cool how the app lets you read/listen to good summaries of books on positive psychology and other topics. I think EA has a lot of room for improvement in terms of supporting members to not only work on pressing issues but also personally thrive in doing so, and I like how you've highlighted that. Where can I find the Optimize community?
Great that you like it. I’m so happy for you!
The best way of getting more involved would be one of the following:
Look for a meetup via meetup.com or elsewhere.
Join their coaching course.
Become a Founding member of their upcoming app and/or wait for the launch of their social media platform.
Become an early investor (you don't have to invest much; $100 used to be perfectly fine as they're crowd-funding the enterprise) and you'll be invited to a couple of talks.