Hm. Okay, I buy that argument. But we can still ask whether the examples are representative enough to establish a concerning pattern. I don’t feel like they are. Leverage and Nonlinear are very peripheral to EA, and they mostly (if the allegations are true) harmed EAs rather than people outside the movement. CFAR feels more central, but the cultishness there was probably more about failure modes of the Bay Area rationality community than anything to do with “EA culture.”
(I can think of other examples of cultish and fanatical tendencies in EA, including from personal experience, but in many of those instances things turned out fine as EA professionalized, so they could even be read positively, as a reason to be less concerned now.)
I guess you could argue that FTX was such a blatant and outsized negative example that you don’t need a lot of other examples to establish the concerning pattern. That’s fair.
But then what is the precise update we should have made from FTX? Let’s compare three possible takeaways:
(1) There’s nothing concerning, per se, about “EA culture,” apart from the fact that EAs were insufficiently vigilant about bad actors.
(2) EAs were insufficiently vigilant about bad actors, and “EA culture” kind of exacerbates the damage that bad actors can do, even though it’s fine when there isn’t a cult-leader-type bad actor in charge.
(3) Due to “EA culture,” EA now contains way too many power-hungry schemers who lack integrity, and this is a permeating problem rather than something you only find in peripheral groups with shady leadership.
I’m firmly in (2) but not in (3).
I’m not sure if you’re concerned that (3) is the case, or whether you think it’s “just” (2) but that (2) is worrying enough by itself and hard to fix. For my part, I think (2) is among the biggest problems with EA, but I’m overall still optimistic about EA’s potential. (I mean, “optimistic” relative to the background of how doomed I think we are for other reasons.) (Though I’m also open to re-branding and reform efforts centered around splitting into professionalized subcommunities and de-emphasizing the EA umbrella.)
Why I think (2) instead of (3): It mostly comes down to my experience and gut-level impressions from talking to staff at central EA orgs, reading their writings/opinions, and so on. People seem genuinely nice, honest, and reasonable, even though they are strongly committed to a cause. FTX was not the sort of update that would overpower my first-order impressions here, which were based on many interactions and a lot of EA experience. (FWIW, it would have been a big negative update for me if the recent OpenAI board drama had been instigated by some kind of backdoor plan to merge with Anthropic, but to my knowledge, those were completely unsubstantiated speculations. After learning more about what went down, they look even less plausible now than they did at the start.)
I will again remind people that Leverage at some point had approximately succeeded at a corporate takeover of CEA, placing both the CEO and their second-in-command in the organization. They really were not very peripheral to EA; they were just covert about it.
That’s indeed shocking, and now that you mention it, I also remember the Pareto Fellowship Leverage takeover attempt. Maybe I’m too relaxed about this, but it feels to me like there’s no nearby possible world where this situation would have kept going. Pretty much everyone I talked to in EA made remarks about how Leverage “is a cult,” and the Leverage person became CEA’s CEO not as the result of a long CEO search process, but because the previous CEO left abruptly and they had few immediate staff-internal options. The CEO (edit: CEA!) board eventually intervened and installed Max Dalton, who was a good leader. Those events happened long ago, and in my view they tentatively(?) suggest that the EA community had a good-enough self-correction mechanism that schemers don’t tend to stay in central positions of power for long. I concede that we can count these as near misses, and maybe even as evidence that there are (often successfully fended off) tensions around EA culture and who it attracts, but I’m not yet on board with seeing these data points as “evidence for problems with EA as-it-is now” rather than “the sort of thing that happens both inside and outside of EA as soon as you’re trying to have more influence.”
I think the self-correction mechanism was not very strong. If Tara (who was also strongly supportive of the Leverage faction, which is why she placed Larissa in charge) had stayed, I think that would have been the long-term equilibrium of the organization. The primary reason the equilibrium collapsed is that Tara left to found Alameda.
Interesting; I didn’t remember this about Tara.
Two data points in the other direction:
A few months (maybe up to 9 months, but could be as little as 1 month; I don’t remember the timing) before Larissa had to leave CEA, a friend and I talked to a competent-seeming CEA staff member who was about to leave the org (or had recently left; I don’t remember the details) because the org seemed like a mess and had bad leadership. I’m not sure if Leverage was mentioned; I could imagine that it was, but my most salient memory is that I thought of it as “not good leadership for CEA.” My friend and I encouraged them to stay and speak up to try to change the leadership, but the person had had enough for the time being or had some other reason to leave (again, I don’t remember the details). Anyway, this person left CEA without a plan to voice their view that the org was in a bad state. I don’t know if they gave an exit interview, deliberately sought out trustees, talked to friends, or said nothing at all; I didn’t stay in touch. However, I do remember that my friend and I discussed whether we should at some point get back in touch with this former CEA staff member and encourage them to find out whether more former colleagues were dissatisfied and whether we could cause a wave of change at CEA. We were so far removed, and had so few contacts with anyone who actually worked there, that it would’ve been a bit silly for us to get involved. And I’m not saying we would’ve done it; it’s easy to talk about stuff like that and then do nothing. Still, I feel like this anecdote suggests that there are sometimes more people interested and invested in good community outcomes than one might think, and multiple pathways to beneficial leadership change (it’s very possible this former staff member had nothing to do with the eventual chain of causes that led to the leadership change, which would mean that multiple groups of people were independently expressing worried sentiments about CEA at that time).
At one point somewhere between 2017 and 2018, someone influential in EA encouraged me to share more about specific things that happened in the EA orgs I worked at, because they were sometimes talking to other people who were “also interested in the health of EA orgs / health of the EA community.” (To avoid confusion, this was not the community health team.) This suggests that people somewhat systematically keep an eye on things, and even if CEA were temporarily taken over by a cultish Silicon Valley community, probably someone would try to do something about it eventually. (Even if it’s just writing an EA Forum post to create common knowledge that a specific org has been taken over and is no longer similar to what it was when it was founded. We did see that posts eventually got written about Leverage, for instance, and the main reasons it didn’t happen earlier are probably that many people thought “oh, everyone knows already” and that, like anywhere else, few people actually take the time to do community-useful bits of work when you can just wait for someone else to do it.)
By the way, this discussion (mostly my initial comment and what it’s in reaction to; not so much the specifics about CEA history) reminded me of this comment about the difficulty of discussing issues around culture and desired norms. Seems like maybe we’d be better off discussing what each of us thinks the best steps forward would be to improve EA culture, or finding a way to promote some kind of EA-relevant message (EA itself, the importance of AI alignment, etc.) and doing movement building around that, so it isn’t at risk of backfiring.
“The CEA board”, right?
What probability would you assign to some weakened version of (3) being true? By a weakened version, I roughly mean taking the “way” out of “way too many,” and defining “too many” as meaningfully above the base rate for people in positions of power/influence.
10%.
Worth noting that it’s not the highest of bars.
Agreed on it not being the highest of bars. I felt there was a big gap between your (2) and (3), so I was aiming at roughly 2.4 to 2.5: neither peripheral nor widespread, with the understanding that the implied scale is somewhat exponential (because 3 is much worse than 2).
Yeah, I should’ve phrased (3) in a way that’s more likely to pass someone like habryka’s Ideological Turing Test.
Basically, I think if EAs were even just a little worse than typical people in positions of power (on the dimension of integrity), that would be awful news! We really want them to be significantly better.
I think EAs are markedly more likely to be fanatical naive consequentialists, which can be one form of “lacking in integrity” and is the main thing* I’d worry about in terms of my maybe being wrong here. To combat that, EAs need to be above average in integrity on other dimensions.
*Ideology-induced fanaticism is my main concern, but I can think of others as well. EA probably also attracts communal narcissists to some degree, or people who like the thought that they are special and can have a lot of impact. Also, according to some studies, utilitarianism correlates with psychopathy, at least in trolley-problem settings. However, EA very much also (and more strongly?) attracts people who are unusually caring and compassionate. And it motivates people who don’t care about power to seek it, which is an effect with strong potential to make things better.