I have been community building in Cambridge, UK, in some way or another since 2015, and have shared many of these concerns for some time now. Thank you so much for writing them up far more eloquently than I would have been able to!
To add some more anecdotal data, I also hear the ‘cult’ criticism all the time. In terms of getting feedback from people who walk away from us: this year, an affiliated (but non-EA), problem-specific table coincidentally ended up positioned downstream of the EA table at a freshers’ fair. We overheard roughly 10 groups of 3 people discussing that they thought EA was a cult after they had bounced from our EA table. Around 2,000–3,000 people passed through, so we only overheard this from about 1–2% of them.
I managed to dig into these criticisms a little with a couple of friends-of-friends outside of EA, and heard a couple of common pieces of feedback which are worth adding:
We give away many free books, lavishly, and they are written by longstanding members of the community. To some outside the community, this feels like doctrine.
Being a member of the EA community is all or nothing. My best guess is that we haven’t thought of anything less intensive to keep people occupied, due to the historical focus on HEAs, where we are looking for people who make EA their ‘all’ (a point well made in this post).
Personally, I think one important reason the situation is different now from how it was some years ago is that EA has grown in size and influence since 2015. It’s more likely that someone has encountered it online, via 80k or some podcast. In larger cities, it’s more likely individuals know friends who have been to an EA event. I think we ‘got away with’ people thinking it’s a cult for a while because not enough people knew about EA; I like to say that the R rate of gossip was < 1, so it didn’t proliferate. I feel we’re nearing or passing a tipping point where discussing EA without EA members present becomes an interesting topic of conversation for non-EAs, since people can relate and have all had personal experiences with the movement.
In my own work now, I feel much more personally comfortable leaning into cause area-specific field building, and groups that focus on a project or problem. These are much more manageable commitments, and can exemplify the EA lens of looking at a project without it becoming a personal identity. Important caveats for the record: I still think EA-aligned motivations are important, I am still a big supporter of the EA Cambridge group, and I think it is run by conscientious people with good support networks :-)
The absolute strongest answer to most of the critiques or problems that have been mentioned recently is strong object-level work.
If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause areas, that is a complete and total answer to:
“Too much” spending
billionaire funding/asking people to donate income
most “epistemic issues”, especially with success in multiple cause areas
If we have the world leaders in global health, animal welfare, pandemic prevention, and AI safety each saying, “Hey, EA has the strongest leaders, and its ideas and projects are reliably important and successful”, no one will complain about how many free books are handed out.
I broadly agree with this, but at least with AI safety there’s a Goodharting issue: we don’t want AIS researchers optimising for legibly impressive ideas/results/writeups.
I assume there’s a similar-in-principle issue for most cause areas, but it does seem markedly worse for AIS, given the lack of meaningful feedback on the most important issues.
There’s a significant downside even in having some proportion of EA AIS researchers focus on more legible results: it gives a warped impression of useful AIS research to outsiders. This happens by default, since there are many incentives to pick a legibly impressive line of research, and there’ll be more engagement with more readable content.
None of this is to say that I know e.g. MIRI-style research to be the right approach.
However, I do think we need to be careful not to optimise for the appearance of strong object level work.
I agree, and think this is an argument for investing in cause-specific groups rather than generalized community building.
When I was working for EA London in 2018, we also had someone tell us that the free-books thing made us look like a cult, drawing a comparison with free Bibles.
One option here could be to lend books instead. Some advantages:
Implies that when you’re done reading the book you don’t need it anymore, as opposed to a religious text which you keep and reference.
While the distributors won’t get all the books back (and that’s fine), the ones they do get back they can lend out again.
Less lavish, both in appearance and in reality.
This is what we do at our meetups in Boston.
It’s also a nice nudge for people to read the books (I remember reading Doing Good Better in a couple of weeks because a friend/organiser had lent it to me and I didn’t want to keep him waiting).
I believe that EA could tone down the free books by 5–10%, but I am pretty skeptical that the books program is super overboard.
I have 50+ books I’ve gotten at events over the past few years (when I was in college), mostly politics/econ/phil stuff: the complete works of John Stuart Mill and Adam Smith, Myth of the Rational Voter, Elephant in the Brain, Three Languages of Politics, etc. (all physical books). Bill Gates’ book has been given out as a free PDF recently.
So I don’t think EA is a major outlier here. I also like that there are some slightly less “EA” books in the mix, like The Scout Mindset and The AI Does Not Hate You.
I think it’s not free books per se, but free books paired with phrases like “here’s what’s really important” and “this is how to think about morality” that are problematic in the context of the Bible comparison.
I’m not sure what campus EA practices are like, but in between pamphlets and books there are zines. Low-budget, high-nonconformity, high-persuasion. Easy for students to write their own, or make personal variations, instead of treating them like official doctrine. E.g., https://azinelibrary.org/zines/
Nice. And when it comes to links, ~half the time I’ll send someone a link to the Wikipedia page on EA or longtermism rather than something written internally.
The criticisms of EA movement-building tactics that we hear are not necessarily the ones that are most relevant to our movement goals. Specifically, I’m hesitant to update much on a few 18-year-olds who decide we’re a “cult” after a few minutes of casual observation at a freshers’ fair. I wouldn’t want to be part of a movement that eschewed useful tools for better integrating its community because it’s afraid of the perceptions of a few sarcastic teenagers.
Instead, I’m interested in learning about the critiques of EA put forth by highly-engaged EAs, non-EAs, semi-EAs, and ex-EAs who care about or share at least some of our movement goals, have given them a lot of thought, are generally capable people, and have decided that participation in the EA movement is therefore not for them.
I made this comment with the assumption that some of these people could have extremely valuable skills to offer on the problems this community cares about. These are students at a top UK university for the sciences, and many of them go on to be significantly influential in politics and business, at a much higher rate than at other unis or in the general population.
I agree that not every student fits this category, or is someone who will ever be inclined towards EA ideas. However, I don’t know if we are claiming that being in this category (e.g. being in the top N% at Cambridge) correlates with a more positive baseline impression of EA community building. Maybe the more conscientious people weren’t the ringleaders in making the comments, but they will definitely hear them, which I think could have social effects.
I agree that EA will not be for everyone, and we should seek good intellectual critiques from people who disagree on an intellectual basis. But to me the thrust of this post (and the phenomenon I was commenting on) was: there are many people with the ability to solve the world’s biggest problems. It would be a shame to lose their inclination purely due to our CB strategies. If our strategy could be nudged to make better impressions at people’s first encounter with EA, we could capture more of this talent and direct it to the world’s biggest problems. Community building strategy feels much more malleable than the content of our ideas or common conclusions, which we might indeed want to be more bullish about.
I do accept that the optimal approach to community building will still turn some people off, but it’s worth thinking about this intentionally. As EA grows, CB culture gets harder to fix (if it’s not already too large to change course significantly).
I also didn’t clarify this in my original comment: it was my impression that many of them had already encountered EA, rather than having picked this up from the messaging of the table. It’s been too long to confirm for sure now, and more surveying would help. This would not be surprising though, as EA has a larger presence at Cambridge than at most other unis (and not everyone at a freshers’ fair is a first-year; many later-stage students attend to pick up new hobbies or whatever).
Another way of stating this is that we want to avoid misdirecting talent away from the world’s biggest problems. This might occur if EA has identified those problems and effectively motivates its high-aptitude members to work on them, but fails to recruit the maximum number of high-aptitude members due to CB strategies optimized for attracting larger numbers of low-aptitude members.
This is clearly a possible failure mode for EA.
The epistemic thrust of the OP is that we may be missing out on information that would allow us to determine whether or not this is so, largely due to selection and streetlamp effects.
Anecdata is a useful starting place for addressing this concern. My objective in my comment above was to point out that this is, in the end, just anecdata, and to question the extent to which we should update on it. I also wanted to focus attention on the people who I expect to have the most valuable insights about how EA could be doing better at attracting high-aptitude members; I expect that most of these people are not the sort of folks who refer to EA as a “cult” from the next table down at a Cambridge freshers’ fair, but I could be wrong about that.
In addition, I want to point out that the character models of “Alice” and “Bob” are the merest speculation. We can spin other stories about “Cindy” and “Dennis” in which the smart, independent-minded skeptic is attracted to EA, and the aimless believer is attracted to some other table at the freshers’ fair. We can also spin stories in which CB folks wind up working to minimize the perception that EA is a cult, and this has a negative impact on high-talent recruitment.
I am very uncertain about all this, and I hope that this comes across as constructive.
A friendly hello from your local persuasion-resistant moderately EA-skeptic hole-picker :)
Nice to see you here, Ferenc! We’ve talked before, when I was at OpenAI and you were at Twitter, and I’m always happy to chat if you’re pondering safety things these days.