EA is a global community—but should it be?

Without trying to wade into definitions, effective altruism is not just a philosophy and a plan of action; it’s also a community. And that means that community dynamics are incredibly important in shaping both the people involved and the ideas. Healthy communities can make people happier, more effective, and better citizens locally and globally—but not all communities are healthy. A number of people have voiced concerns about the EA community in the recent past, and I said at the time that I thought we needed to take those concerns seriously. The community’s failure to realize what was happening with FTX isn’t itself an indictment of the community—especially given that FTX’s major investors did not know either—but it is a symptom that reinforces many of the earlier complaints.
The solutions seem unclear, but there are two very different paths that would address the failure: either reform, or rethinking the entire idea of EA as a community. So while people are thinking about changes, I’d like to suggest that we not take the default path of least-resistance reforms, at least not without seriously considering the alternative.
“The community” failed?
Many people have said that the EA community failed when it didn’t realize what SBF was doing. Others have responded that no, we should not blame ourselves. (As an aside, when Eliezer Yudkowsky is telling you that you’re overdoing heroic responsibility, you’ve clearly gone too far.) But when someone begins giving to EA causes, whether individually, via Founders Pledge, or by setting up something like SFF, there is no one vetting them for being honest or for having proper controls in place.
The community was trusting—in this case, much too trusting. And people have said that they trusted the apparent (but illusory) consensus of EAs about FTX. I am one of them. We were all too trusting of someone who, according to several reports, had a history of breaking rules and cheating others, including an acrimonious split early on at Alameda and, evidently, more recent front-running. The people who raised flags were apparently ignored, while others feared becoming pariahs if they spoke out more publicly.
But the idea that I and others trusted in “the community” is itself a problem. Like Rob Wiblin, I generally subscribe to the idea that most people can be trusted. But I wasn’t sufficiently careful about the fact that the kind of trust that applies to “you won’t steal from my wallet, even if you’re pretty sure you can get away with it” doesn’t scale to “you can run a large business or charity with effectively no oversight.”
A community that trusts by default is only sustainable if it is small. Claiming to subscribe to EA ideas, especially when you can be paid well for doing so, isn’t much of a reason to trust anyone. And given the size of the EA community, we’ve already passed the point where trusting others because of shared values is viable.
Failures of Trust
There are two ways to have high trust: naivety and sophistication. The naive way is what EA groups have employed so far; the sophisticated way requires infrastructure that makes cheating difficult and costly.
To explain: when I started graduate school, I entered a high-trust environment. I never thought about it, partly because I grew up in a religious community that was also high trust. So in grad school, I was comfortable leaving my wallet on my desk when going to the bathroom, or sometimes even when I had an hour-long meeting elsewhere in the building.
I think it was during my second year that someone had something stolen from their desk—I don’t recall what; maybe it was a wallet. We all received an email saying that if someone took it, they would be expelled, and that the administration really didn’t want to review the security camera footage, but they would if they needed to. It had never occurred to me that there were cameras—but of course there were, if only because RAND has a secure classified facility on campus, and security officers who occasionally needed to respond to crazy people showing up at the gate. They could trust because they could verify.
Similarly, the time sheets for billing research projects, which were how everyone, including the grad students, got paid, were reviewed. I know that there were flags, because another graduate student I knew was pulled in and questioned for billing two 16-hour days in one week. (They legitimately had worked insane hours those two days to get data together to hit a deadline—but someone was checking.) You can certainly have verifiably high-trust environments if it’s hard to cheat without getting caught.
But EA was, until now, a high-trust group by default. That’s a huge advantage in working with others: knowing you are value-aligned, that you can assume others care, and that you can trust them means coordination is far easier. The FTX incident has probably partly destroyed that. (And if not, it should at least prompt a serious reevaluation of how we use social trust within EA.)
Restoring Trust?
I don’t think that returning to a high-trust default is an option. Instead, if we want to reestablish high trust throughout the community, we need to do so by fixing the lack of a basis for that trust—and that means institutionalization and centralization. For example, we might need institutions to “credential” EA members, or at least EA organizations, perhaps to allow democratic control, or at least clarity about membership. Alternatively, we could double down on centralizing EA as a movement, putting even more power and responsibility on whoever ends up in charge—a more anti-democratic exercise.
However we manage to rebuild trust, the transition is going to be expensive and painful—but if you want a large and growing high-trust community, it can’t really be avoided. I don’t think that what Cremer and Kemp suggest is the right approach, nor are Cremer’s suggestions to MacAskill sufficient for a large and growing movement, but some such measures are necessary, and if they are not taken, I think the community should announce alternative structures sooner rather than later.
This isn’t just about trust, though. We’ve also seen allegations that EA as a community is too elitist, that it’s not a safe place for women, that it’s not diverse enough, and so on. These are all problems to address, but they are created by a single decision—to have an EA community at all. And the easy answer to many of these problems is to have a central authority and build more bureaucracy. But is that a good idea?
The alternative is rethinking whether EA should exist as a community at all. And—please save the booing for the end—maybe it shouldn’t be one.
What would it mean for Effective Altruism to not be a global community?
Obviously, I’m not in charge of the global EA community. No one is—not even CEA, whose mission is “dedicated to building and nurturing a global community.” Instead, individuals, and by extension local and international communities, are in charge of themselves. Clearly, nobody needs to listen to me. But nobody needs to listen to the central EA organizations either—and we don’t need to, and should not, accept the status quo.
I want to explore the claim that trying to have a single global community is, on net, unhelpful, and what the alternative looks like. I’m sure this will upset people, and I’m not saying the approach outlined below is necessarily the right one—but I do think it’s a pathway we, yes, as a community, should at least consider.
And I have a few ideas about what a less community-centered EA might look like. To preface them, however: “community” isn’t binary. And even at the most extreme, abandoning the idea of EA as a community would not mean banning hanging out with other people inspired by the ideas of Effective Altruism, nor would it mean losing touch with current friends. It would also not mean canceling meet-ups or events. But it does have some specific implications, which I’ll try to explore.
Personal Implications
First, it means that “being an EA” would not be an identity.
This is probably epistemically healthy—the natural tendency to defend an in-group is far worse when attacks seem to include you, rather than attacking a philosophy you admire or other individuals who like the same philosophy. I don’t feel attacked when someone says that some guy who reads books by Naomi Novik is a jerk[1], so why should I feel attacked when someone says a person who read and agreed with “Doing Good Better” or “The Precipice” is a jerk?
Not having EA as an identity would also mean that public relations stops being something a larger community cares about—thankfully. Individual organizations would, of course, do their own PR, to the extent that it was useful. This seems like a great outcome—community-level PR isn’t a good thing for anyone to care about. We should certainly be concerned about ethics and about not doing bad things, not about the way things look.
Community Building Implications
Not having EA as a community obviously implies that “EA Community Building” as a cause area, especially a monolithic one, should end. In retrospect, I think explicitly endorsing this as a cause to champion was a mistake. Popularizing ideas is great, and bringing together people with related interests is helpful, but some really unhealthy dynamics were created, and fixing them seems harder than simply abandoning the idea and starting over.
This would mean that we stopped doing “recruitment” on college campuses—which was always somewhat creepy. Individual EAs on campus would presumably still tell their friends about the awesome ideas, recommend books, or even host reading groups—but these would be aimed at convincing individuals to consider the ideas, not to “join EA.” And individuals in places with other EAs would certainly be welcome to tell friends and have meet-ups. But these wouldn’t be thought of as recruitment, and they certainly wouldn’t be subsidized centrally.
Wouldn’t this be bad?
CEA’s website says “Effective altruism has been built around a friendly, motivated, interesting, and interested group of people from all over the world. Participating in the community has a number of advantages over going it alone.” Would it really be helpful to abandon this?
My answer, tentatively, is yes. Communities work well with small numbers of people, and less well as they grow. A single global community isn’t going to allow high trust without building, in effect, a church. I’m fairly convinced that Effective Altruism has grown past the point where a single community can be safe and high trust without hierarchy and lots of structure, and I don’t know that there’s any way for that to be done effectively or acceptably.
Of course, individuals want and need communities—local communities, communities of shared interest, communities of faith, and so on. But putting the various parts of effective altruism into a single community, I would argue, was a mistake.
More Implications, and some Q&A
Would this mean no longer having community building grants, or supporting EA-community institutions?
First, I think that we should expect communities to be self-supporting, outside of donor dollars. Having workspaces and the like is great, but giving yourself a community is not an impartially altruistic act. It’s much too easy to view self-interested “community building” as genuinely altruistic work, and a firewall between the two would be helpful.
Given that, I strongly think that most EAs would be better off giving their 10% to effective charities focused on the actual issues, and then paying dues, or voluntarily contributing other non-EA-designated funds, for community building. That seems healthier for the community, and as a side benefit, it removes the current centralized “control” of EA communities, which are dependent on CEA or other groups.
There are plenty of people who are trying to give far more than 10% of their income. Communities are great—but paying for them is a personal expense, not altruism. And from where I stand, someone giving half their salary to the “altruistic cause” of hosting community events and recruiting more people isn’t doing effective altruism. I would far rather have people give “only” 10% to charity and use their other money to pay dues toward hosting or subsidizing fun events for others in their community, or to pay to work in an EA-aligned coworking space.
Similarly, college students and groups that wanted to run reading clubs about EA topics would be more than welcome to ask alumni or others to support them. There is a case to be made for individuals spending money to subsidize that—but things like community retreats should be paid for by attendees, or at most, should be subsidized with money that wasn’t promised to altruistic causes.
What about EA Global?
I think it would mean the end of “EA Global” as a generic conference. I have never attended, but I think having conferences where people can network is great—however, the way these are promoted and paid for is not. Davos is also a conference for important people to network, and lots of good things are done there, I am sure. We certainly should not be aiming to have an EA equivalent.
Instead, I would hope the generic EA Global events are replaced by cause- and career-specific conferences, which would be more useful at the object level. I also think that having people pay to attend is good, rather than having their hotel rooms and flights covered. Organizations or local groups that send people would be welcome to pay on behalf of the attendees, since they presumably get value from doing so. And for individuals who can’t otherwise afford it, or for under-represented groups or locations, scholarships can be offered, paid for in part by the prices paid by other attendees, or by other conference sponsors. (Yes, conferences are usually sponsored, rather than paid for by donations.)
Wouldn’t this make it harder for funders to identify promising younger values-aligned people early?
Yes, it would. But that actually seems good to me—we want people to demonstrate actual ability to have impact, not willingness to attend paid events at top colleges and network their way into what is already a pretty exclusive club.
Wouldn’t this tilt EA funders towards supporting more legibly high-status people at top schools?
It could, and that would be a failure in the design of the community. That seems bad to me, and it should be countered with more explicitly egalitarian efforts to find high-promise people who didn’t have parents who attended Harvard—but that isn’t something paid and exclusive conferences will address. Effective Altruism doesn’t have the best track record in this regard, and remedies are needed—but preserving the status quo isn’t a way to fix the problem.
Should CEA be defunded, or blamed for community failures?
No, obviously not. This post does explicitly challenge some of CEA’s goals, and I hope it is taken in the spirit it is intended—as exploration and, hopefully, constructive criticism. CEA does tons of valuable work, which shouldn’t go away. Even if others agree that the current dynamics should change, I am still unsure how radically CEA should change direction. But if the direction I suggest is something community members think is worth considering, CEA is obviously the key organization that would need to change.
Is this really a good idea?
It certainly isn’t something to do immediately in 2023, but I do think it’s a useful direction for EA to move in. And directionally, I think it’s probably correct—though working out the exact path, and how the transition should be handled, is something that should be discussed.
And even if people dislike the idea, I hope it will prompt discussion of where current efforts have gone wrong. We should certainly be planning for the slightly-less-than-immediate term, and publicly thinking about the direction of the movement. We need to take seriously the question of what EA looks like in another decade or two, and I haven’t seen much public thinking about that question. (Perhaps longtermism has distracted people from thinking on the scale of single decades. Unfortunately.)
But rarely is a new direction something that one person outlines and everyone then decides to pursue. (If that were how it worked, the group would be much too centrally controlled—something the founders of EA have said they don’t want.) And I do think that something like this is at least one useful path forward for EA.
If EA individuals and groups move in this direction, I think it could be good, but details matter. At the same time, trajectory changes for large groups are slow, and should be deliberated. So the details I’ve outlined are meant to push the envelope, and to prompt consideration of a path different from the one we are on.
[1] I promise I picked this as an example before Eliezer wrote his post. Really.