Hi, thank you for starting this conversation! I am an EA outsider, so I hope my anecdata is relevant to the topic. (This is my first post on the forums.) I found my way to this post during an EA rabbit hole after signing up for the “Intro to EA” Virtual Program.
To provide some context, I heard about EA a few years ago from my significant other. I was/am very receptive to EA principles and spent several weeks browsing through various EA resources/material after we first met. However, EA remained in my periphery for around three years until I committed to giving EA a fair shake several weeks ago. This is why I decided to sign up for the VP.
I’m mid-career instead of enrolled in university, so my perspective is not wholly within the scope of the original post. However, I like to think that I have many qualities the EA community would like to attract:
I (dramatically) changed careers to pursue a role with a more significant positive impact and continue to explore how I can apply myself to do the “most good”.
I’m well-educated (1 bachelor’s degree & 2 master’s degrees).
As a scientist for many years, I value evidence-based decision-making and rationality, both professionally and personally.
I have professional experience managing multiple projects with large budgets and diverse stakeholders. This requires three of the six skills listed in the top talent gaps identified at your Leadership Forum (as mentioned in the original post).
As a data scientist, I have practical & technical expertise in machine learning (related to the last talent gap in the list mentioned above).
I’m open-minded. (No apparent objective evidence comes to mind. I suppose you’ll have to talk to me and verify for yourselves. :-) )
If we agree that EA would prefer to attract rather than “turn off” people with these qualities, then the following introspections regarding my resistance to participating in the movement may be helpful:
The heavy, heavy focus on university recruiting feels … off.
First, let me emphasise that I understand all the practical reasons for focusing on student outreach. @Chris Long does a great job listing why this is an actionable, sensible strategy in this thread. I understand and sympathise with EA’s motivations. My following points are from the perspective of an “EA outsider” and of others who may not care enough to consider the matter more deeply than their initial impression.
Personally, ‘cult’ didn’t immediately come to mind, despite it being a common criticism many of you have encountered. Still, the aggressive focus on recruiting (primarily young) university students can seem a bit predatory. When there is a perceived imbalance in recruitment tactics, red flags can instinctively pop up in the back of people’s minds.
The EA community seems homogeneous—and not just demographically.
The homogeneity is a natural consequence of the heavy focus on university outreach. Whenever I encounter EAs, I’m generally the oldest … and I’m only in my 30s! (Is there a place in this community if you’re not fresh out of uni?) The youthful skew of the community contributes to an impression that there is a small group of influential figures dictating the vision/strategy of the movement and a mass of idealistic, young recruits eager to execute on it. People who get things done want to find other people who get things done. It’s not reassuring if it feels like the movement is filled with young (albeit talented & intelligent) people who can barely be trusted with leading student groups (requiring scripts, strict messaging, etc.).
Since the aggressive university outreach focuses on prestigious institutions, the group can seem elitist. Again, I understand the cold realities of this world mean that there are practical considerations for supporting this approach. As an outsider, it isn’t easy to discern if the pervasive mentions of top institutions are for practicality or for signalling. I also understand the importance of epistemic alignment. However, when the EA Global application requirement (as an example) is juxtaposed alongside aggressive recruitment at top universities, it starts to seem like EA is looking for “the right kind of people” to join their club in a less benign sense. Admittedly, I have a giant chip on my shoulder from my upbringing on the wrong side of the socio-economic tracks. Even with that self-awareness (and a Berkeley degree), some of my hesitancy to engage is the concern that my value to the community would not be judged mainly on the merit of my contributions but rather on my academic pedigree. I value my time and energy too much to play those games.
Breaking into the hive mind
EAs seem uniformly well-informed and studied on a core body of seminal studies, books, websites, and influential figures. Objectively, it’s a credit to your community that there is such high engagement and consistency in your messaging. To an outsider, it feels like a steep learning curve before being considered a “real EA”. (Is there an admission exam or something? Do I need to recite Peter Singer from memory? :-) ) This is more of a compliment than anything. Maybe just be mindful of what you’re trying to achieve when you name-drop, cite a study, or reference philosophy terminology in conversation. Is the motivation in doing so to communicate clearly or to posture? At best, EA newbies may feel intimidated. At worst, they/we may get defensive.
To a natural sceptic and critical thinker, the uniformity also feels a little like indoctrination. What are the areas of active, constructive disagreement? Does the community accept (or even encourage) dissenting (but well-reasoned) opinions? What are the different positions? What are the caveats of the seminal studies? It’s not apparent on the surface, and free-thinkers are generally repelled by the notion of conformity for the sake of belonging. (In the “Intro to EA” Virtual Program syllabus, I noticed that there is attention to EA critiques. I’m looking forward to experiencing how that conversation is facilitated.)
Does EA care about anything other than AI safety nowadays?
I read about all these significant EA initiatives tackling malaria, global poverty, factory farming, etc., during my first exploration of the movement a few years ago. But nowadays, it seems that all I hear about is AI safety. Considering how challenging it is to forecast existential risk, are you really so confident that this one cause is the most impactful, most neglected, and most tractable, so much so that it warrants overshadowing all the other causes? I agree that AI safety is an important cause that deserves attention. However, the fervour around it seems awfully reminiscent of the “Peak of Inflated Expectations” on the Gartner Hype Cycle. It’s not so much that I have anything against AI safety in particular. The impression of “hype” itself is not a great look to someone seeking a community of critical thinkers to engage with. Combined with the homogeneity of the community, it makes me suspicious of “group think”.
I want to explicitly state that I know that not all of these impressions are entirely true. I know that EAs aren’t all out-of-touch, pretentious jerks. The 80,000 Hours job board has several postings across many cause areas aside from AI safety. The impressions described above are primarily from my perspective before actively trying to vet my concerns. However, I imagine that others who share these impressions don’t bother to validate their concerns before dismissing the movement.
So why did I go through the trouble of digging deeper? Well, probably because EA is the closest I’ve found to a community consistent with my own values, motivations, and interests. Despite my reservations, I really want my concerns to be wrong and for EA to work. More importantly, I’ve grown to trust the values, motivations, judgement, and competency of my significant other, who is committed to EA’s mission. Through him, I’ve met other EAs who are also great people. Quality people tend to attract other quality people. For this reason, @Theo Hawking’s imperative to pause and reflect on a) what EA considers a quality conversion and b) whether current EA practices are attracting or repelling quality conversions is a worthy exercise.
On a final note, I suspect that the free books or the 10% tithing to charity that people cite to explain their “cult” label for EA are merely convenient justifications that don’t address the core of their impression. After all, why would they bother investing effort to pinpoint and articulate the sources of their general negative feeling about the movement if they’re already disengaged? I suspect that the “cult” feeling has more to do with the homogeneity and “group think” concerns I described above. To combat these negative impressions, I’d recommend:
Diversify your recruitment tactics. I particularly liked the suggestion about recruiting around specific cause areas mentioned by @Jamie Bernardi. I suspect this will also help with your talent gaps. Representation at adjacent conferences/events would also be a channel to reach established professionals. As I was exploring how I might do the most good before I heard of EA, I attended many events like the Data for Good Exchange 2019 (bloomberg.com) and would have been very receptive to hearing about EA there.
Emphasise the projects and the work. @Charles He hit the nail on the head. I would go even further than just aiming to have the best leaders in cause areas. Are EA orgs and their work generally respected and well-regarded by other players in a cause area? In other words, does EA “play well with others”, or are you primarily operating in your own bubble? If EA is objectively and demonstrably doing great work, then other major players should be open to adopting similar practices, further magnifying the impact. If that’s not happening, does EA have the self-awareness to understand why and act upon it?
In conversations with outsiders, favour tangible issues/outcomes and actionable ideas instead of thought experiments. (My perspective skews to the practical, so feel free to discount my emphasis on this point depending on your role in EA.) If the aim is to get more people excited about doing the most good, then describe the success of the Against Malaria Foundation or the scale of impact specific government policies may have rather than using the trolley problem to discuss utilitarianism. Yes, thought experiments are both fun and valuable, but there is a time and a place.
Be accepting of varying styles of communication around ideas and issues. Not everyone interested in cause areas or doing “the most good” will be fluent in philosophy or psychology. If we can communicate concepts, thoughts, or ideas reasonably and productively, it’s often unnecessary to derail the conversation on a pedantic tangent. Don’t treat me like an unenlightened pleb if you have to explain why you name-dropped a researcher I hadn’t heard of during our conversation. (This is somewhat tongue-in-cheek. :-) )
I hope my diatribe will be received constructively because I am invested in seeing EA succeed, regardless of whether I consider myself an EA at the moment. Anecdata is not rigorous, so who knows how generalisable my data point is. However, upon reading this thread, I realised that my complicated disposition towards EA is not uncommon and decided to share my viewpoint. For whatever that’s worth. :-)
Thanks so much for sharing your thoughts in such detail here :)
Thank you for raising this issue. You are in your 30s, I am in my 50s, and I am partway through the Intro to EA program. If you can feel like an outsider at thirty-something, imagine how it might be for a fifty-something.
Briefly, these are my thoughts:
There is such a predominance of youth that there is a sense that much of this has not been thought about before, and that therefore my lived experience has little merit. Yet I have lived the life of an EA, even if it had no name.
There is a certain complacency in the idea that EA is using science for decision-making (I noted Toby Ord’s reference to that in a talk) without perhaps remembering that scientists are simply biased humans too. Galton was a much-lauded academic statistician, yet he also pioneered eugenics.
I have a bias here as someone whose neurodiversity means I have significant issues with mathematical concepts, yet I managed to understand the excess risk being taken in the City in 2006. I left my legal role because I was exhausted defending the spread of the much-praised skills of hedge funders and the like. I remain convinced that there is a substantial failure to admit that raw human behaviours are very strong over-rulers. Dominant men had new toys, and they would be used. That came through strongly, I felt, in that excellent Forum post on the race for the nuclear bomb, and it had already begun to come through to me around AI (I had created a shortcut explanation in my head: ‘oh, it’s the usual race thing, and some overpowerful man will just one day set something going because he can’).
It is hard to find answers to what feel like some very basic questions, such as the choice of charities on GiveWell. It seems to me that some hard questions don’t even get asked, e.g. why should charitable donations make good what the Nigerian government is failing to do in its own program to distribute Vitamin A? I have searched for criteria that might address the choice of charity but cannot find them. I do not understand why there is no prioritising of a malaria vaccine over reducing the risk of catching the disease. This is of particular interest to me, as I was the founding trustee of a charity in the UK that has its parent in the US. My scout mindset has still found no reason to doubt my support of it. I also wonder about the potential for GiveWell acting as a funnel that might adversely affect other charities, in effect creating its own neglectedness. I raise these questions in the Intro discussions, but there is no traction or explanation.
I hope that somehow I will find my place within the EA world; maybe I can set up “EA for Oldies: your contribution is relevant too”? I understand that I have only been looking at the Forum for a month or so; if someone can point to any area that does consider how those of us towards the end of our careers can contribute, I would be very grateful.
Thanks so much for sharing your perspective in such detail! Just dropping a quick comment to say you might be interested in this post on EA for mid-career people by my former colleague Ben Snodin if you haven’t seen it. I believe that he and collaborators are also considering launching a small project in this space.
Thanks for the lead! The post you linked seems perfectly suited to me. I’ll also contact Ben Snodin to inquire about what he may be working on around this matter.
For onlookers, there’s also a website by Ben (my coworker) and Claire Boine.
While the post and this comment are now both ancient, I feel compelled to at least leave a short note here after reading them.
My background is in many ways similar to Sarah’s, and I came into contact with the EA community about half a year ago. Unfortunately, 2.5 years later, most of the points raised here still resonate heavily with my experiences: especially the hive mentality, the heavy focus on students (with little effort directed towards professionals), and the overemphasis on AI safety (or, more generally, highly specialised cause areas overshadowing the overall philosophy).
I don’t know what the solutions are, but the problem still seems to be present.