I am deactivating my account.[1] My unfortunate best guess is that, at this point, there is little point in, and at least a bit of harm caused by, my commenting more on the EA Forum. I am sad to leave behind so much that I have helped build and create, and even sadder to see my own actions indirectly contribute to much harm.
I think many people on the forum are great, and at many points in time this forum was one of the best places for thinking and talking and learning about many of the world’s most important topics. Particular shoutouts to @Jason, @Linch, @Larks, @Neel Nanda and @Lizka for overall being great commenters. It is rare that I had conversations with any of you that I did not substantially benefit from.
Also great thanks to @JP Addison🔸 for being the steward of the forum through many difficult years. It’s been good working with you. I hope @Sarah Cheng can turn the ship around as she takes over responsibilities. I still encourage you to spin out of CEA. I think you could fundraise: by most people’s lights, I think, the forum is responsible for more than 3% of CEA’s impact, and all you need is 3% of CEA’s budget to make a great team.
I have many reasons for leaving, as I have been trying to put more distance between myself and the EA community. I won’t go into all of them, but I do encourage people to read my comments over the last 2 years to get a sense of them; I think there is some good writing in there.
The reason I would be most remiss not to mention here is the increasing sense of disconnect I have been feeling between what once was a thriving and independent intellectual community, open to ideas and leadership from any internet weirdo who wants to do as much good as they can, and the present EA community, whose identity, branding and structure are largely determined by a closed-off set of leaders with little history of intellectual contributions and little connection to what attracted me to this philosophy and community in the first place. The community feels very leaderless and headless these days, and in the future I only see candidates for leadership that are worse than none. Almost everyone who has historically been involved in a leadership position has stepped back and abdicated that role.
I no longer really see a way for arguments, or data, or perspectives explained on this forum to effect change in what actually happens with the extended EA community, especially in domains like AI safety research, AGI policy, internal community governance, or more broadly steering humanity’s development of technology in positive directions. I think that while shallow criticism often gets valorized, the actual life of someone who tries to make things better, by rewarding and funding good work and holding people accountable, is one of misery and adversarial relationships, accompanied by censure, gaslighting and an overall deep sense of loneliness.
To be clear, there has always been an undercurrent of this in the community. When I was at CEA back in 2015, we frequently and routinely deployed highly adversarial strategies to ensure we maintained more control over what people understood EA to mean, and who would get to shape it, and the internet weirdos were often a central target of our efforts to make others less influential. But it is more true now. The EA Forum was not run by CEA at the time, and maybe that was good. Funding was not so extremely centralized in a single large foundation, and that foundation still had a lot more freedom and integrity back then.
It’s been a good run. Thanks to many of you, and ill wishes to many others. When the future is safe, and my time is less sparse, I hope we can take the time to figure out who was right about things. I certainly don’t speak with confidence on many things I have disagreed with others on, only with conviction to try to do good even in a world as confusing and uncertain as this, and to not let the uncertainty prevent me from saying what I believe. It sure seems like we all made a difference, just unclear with what sign.

[1] I won’t use the “deactivate account” feature, which would delete my profile. I am just changing my username and bio to indicate I am no longer active.
Habryka, just wanted to say thank you for your contributions to the Forum. Overall I’ve appreciated them a lot! I’m happy that we’ll continue to collaborate behind the scenes, not least because I think there’s still plenty I can learn from you. I think we agree that running the Forum is a big responsibility, so I hope you feel free to share your honest thoughts with me.
I do think we disagree on some points. For example, you seem significantly more negative about CEA than I am (I’m probably biased because I work there, though I certainly don’t think it’s perfect). I also think that the discussions on the Forum do effect real change, though of course it’s hard to know how much with any real confidence. I know of at least two specific cases in which a person in a position with some power (in the real world, not in the EA community) has taken action based on something they read on the Forum, and my impression is that many people who have power within the EA community continue to read the Forum even if they don’t make time to write here. Of course, it’s true that they could ignore serious criticism if they wanted to, but my sense is that people actually quite often feel unable to ignore criticism. So I guess I am more optimistic that the Forum, as an extremely public community space, can continue to provide value by playing this role.
By the way, I personally care a lot about EA reaching its future potential for doing the most good. Habryka, I don’t know the details of what you went through when trying to make things better, but I’m sorry to hear that it felt so bad. I’ll just say that, if anyone feels like they are trying to make things better in EA and are unable to do so, you’re welcome to reach out to me directly (you can message me via the Forum). I will at least hear you out and give you my thoughts, and perhaps you can convince me to help.
Of course, it’s true that they could ignore serious criticism if they wanted to, but my sense is that people actually quite often feel unable to ignore criticism.
As someone sympathetic to many of Habryka’s positions, while also disagreeing with many others, my immediate reaction to this was “well, that seems like a bad thing”, cf.
shallow criticism often gets valorized
I’d feel differently if you had said “people feel obliged to take criticism seriously if it points at a real problem” or something like that, but I agree with you that the mechanism is more like “people are unable to ignore criticism irrespective of its quality” (the popularity of the criticism matters, but sadly that is only weakly correlated with quality).
I appreciate you sharing your views on this! I agree that as a whole, this is suboptimal.
I don’t currently feel confident enough about the take that “shallow criticism often gets valorized” to prioritize tackling it. However, I am spending some time thinking about moderation and managing user-generated content, and I expect that the mod team (including myself) will discuss how we’d like to handle critical comments, so this will probably come up in our discussions.
I’m kind of worried that there’s not necessarily an objective truth to how shallow/low-quality any particular criticism is, and I personally would prefer to err on the side of allowing more criticism. So it’s possible that not much changes in the public discourse, and any interventions we do may need to be behind the scenes (such as our team spending more time talking with people who get criticized).
It feels appropriate that this post has a lot of hearts and simultaneously disagree reacts. We will miss you, even (perhaps especially) those of us who often disagreed with you.
I would love to reflect with you on the other side of the singularity. If we make it through alive, I think there’s a decent chance that it will be in part thanks to your work.
fyi for anyone like me who doesn’t have lots of the backstory here and doesn’t want to read through Habryka’s extensive corpus of EAF writings, here is Claude 3.7 Sonnet’s summary based on the first page of comments Habryka links to.
Based on Habryka’s posts, I can provide a summary of his key disagreements with EA leadership and forum administrators that ultimately led to his decision to leave the community.
Key Disagreements
Leadership and Accountability: Habryka repeatedly expresses concern about what he sees as a “leaderless” EA community. He believes the community has shifted from being driven by independent intellectual contributors to being determined by “a closed-off set of leaders with little history of intellectual contributions.” He argues that almost everyone who was historically in leadership positions has stepped back and abdicated their roles.
Institutional Integrity: He criticizes EA organizations, particularly CEA (Centre for Effective Altruism), for prioritizing growth, prestige, and public image over intellectual integrity. In his posts, he describes personal experiences at CEA where they “deployed highly adversarial strategies” to maintain control over EA’s public image and meaning.
FTX Situation: Habryka was particularly critical of how EA leadership handled Sam Bankman-Fried (SBF) and FTX. He claims to have warned people about SBF’s reputation for dishonesty, but these warnings were not heeded. He criticizes Will MacAskill and others for their continued endorsement of SBF despite red flags, and was frustrated by the lack of transparency and open discussion after FTX’s collapse.
Risk-Aversion and PR Focus: He repeatedly criticizes what he perceives as excessive risk-aversion and PR-mindedness among EA organizations. He argues this approach prevents honest discussion of important issues and contributes to a culture of conformity.
Funding Centralization: Habryka expresses concern about EA funding being increasingly centralized through a single large foundation (likely referring to Open Philanthropy), arguing this concentration of resources creates unhealthy power dynamics.
Community Culture: He criticizes the shift in EA culture away from what he describes as “a thriving and independent intellectual community, open to ideas and leadership from any internet weirdo” toward something more institutional and conformist.
Failure to Create Change: Habryka states that he no longer sees “a way for arguments, or data, or perspectives explained on this forum to effect change in what actually happens with the extended EA community,” particularly in domains like AI safety research and community governance.
His departure post suggests a deep disillusionment with the direction of the EA community, expressing that while many of the principles of EA remain important, he believes “EA at large is causing large harm for the world” with “no leadership or accountability in-place to fix it.” He recommends others avoid posting on the EA Forum as well, directing them to alternatives like LessWrong.
Hmm, I’m not a fan of this Claude summary (though I appreciate your trying). Below, I’ve made a (play)list of Habryka’s greatest hits,[1] ordered by theme,[2][3] which might be another way for readers to get up to speed on his main points:

Leadership
The great OP gridlock
On behalf of the people
You are not blind, Zachary[4]

Reputation[5]
Convoluted compensation
Orgs don’t have beliefs
Not behind closed doors

Funding
Diverse disappointment
Losing our (digital) heart
Blacklisted conclusions

Impact
Many right, some wrong
Inverting the sign
A bad aggregation
[1] ‘Greatest hits’ may be a bit misleading. I’m only including comments from the post-FTX era, and even then, only comments that touch on the core themes. (This is one example of a great comment I haven’t included.)
[2] Although the themes overlap a fair amount.
[3] My ordering is quite different to the karma ordering given on the GreaterWrong page Habryka links to. I think mine does a better job of concisely covering Habryka’s beliefs on the key topics. But I’d be happy to take my list down if @Habryka disagrees (just DM me).
[4] For context, Zachary is CEA’s CEO.

FWIW, it looks like Claude is only summarising this quick take (all the quotes are from it).

Hmm, true. I gave it the whole GreaterWrong page of comments; maybe it just didn’t quote from those for some reason.
Here’s another summary. I used Gemini 2.0 Flash (via the API), and this prompt:
The following is a series of comments by Habryka, in which he makes a bunch of criticisms of the effective altruism (EA) movement. Please look at these comments and provide a summary of Habryka’s main criticisms.
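(For anyone who wants to reproduce this kind of summary, here is a minimal sketch of what the API call might look like, assuming the google-generativeai Python client. How the comments were gathered, and the generation settings, are my own guesses rather than what the commenter actually ran.)

```python
# Minimal sketch of the summarisation call described above, assuming the
# google-generativeai Python client and a GEMINI_API_KEY environment variable.
# The thread doesn't show how the comments were collected, so `comments_text`
# is a placeholder here.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash")

prompt = (
    "The following is a series of comments by Habryka, in which he makes a "
    "bunch of criticisms of the effective altruism (EA) movement. Please look "
    "at these comments and provide a summary of Habryka's main criticisms.\n\n"
)

comments_text = "..."  # placeholder: the ~13k words of comments, pasted in

# Send the prompt plus the comments in a single request and print the summary.
response = model.generate_content(prompt + comments_text)
print(response.text)
```

It produced the following summary: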
Lack of leadership and accountability: He believes EA leadership is causing harm and lacks mechanisms for correcting course.
Emphasis on PR and narrative control: He condemns EA organizations’ risk aversion, guardedness, and attempts to control the narrative around FTX, prioritizing public image over transparency.
Inadequate community health: He laments conformity pressures, fears of reprisal for dissent, and insufficient efforts to cultivate a culture of open disagreement.
Entanglement with FTX: He faults EA leadership, particularly Will MacAskill, for endorsing Sam Bankman-Fried and entangling the movement with FTX despite warnings about SBF’s character.
Hero worship and lack of respect for intellectual leaders: He criticizes the hero worship of MacAskill, contrasting it with MacAskill’s perceived lack of engagement with other intellectual leaders in the community. He sees this as part of a pattern of MacAskill prioritizing popularity and prestige over community health and epistemic integrity.
Misleading communications and lack of transparency: He criticizes CEA for making inaccurate and misleading statements, for omitting crucial context in communications, and for concealing information about funding decisions.
Scaling too quickly and attracting grifters: He worries that EA’s rapid growth and increased funding attract deceptive actors and create perverse incentives.
Overreliance on potentially compromised institutions: He expresses concerns about EA’s deep ties to institutions like Oxford University, which may stifle intellectual exploration and operational capacity.
Ignoring internal warnings about FTX: He reveals that he and others warned EA leadership about Sam Bankman-Fried’s reputation for dishonesty, but those warnings went unheeded. He suggests he personally observed potentially illegal activities by SBF but chose not to share this information more widely.
Flawed due diligence and poor judgment in grantmaking: He feels EA leadership’s due diligence on SBF was inadequate and that they made poor judgments in providing him with substantial resources. He extends this criticism to grantmaking practices more generally.
Unfair distribution of resources: He argues that the current distribution of funds within EA doesn’t adequately compensate those doing object-level work and undervalues their contributions relative to donors. He argues for a system that recognizes the implicit tradeoff many have made in pursuing lower-paying EA-aligned careers.
Centralized media policy and negative experiences with journalists: While supporting a less centralized media policy, he also cautions against interacting with journalists, as they frequently misrepresent interviewees and create negative experiences.
The original post is only 700 words, and this is like half that length. Can you not give people the dignity of reading their actual parting words?

Pablo and I were trying to summarise the top page of Habryka’s comments that he linked to (~13k words), not this departure post itself.
This is a pity. My impression is that you added a lot of value, and the fact you’re leaving is a signal we’ll have fewer people like you involved. It’s probably a trade-off, but I don’t know if it’s the right trade-off. Thanks for your contribution!
So long and thanks for all the fish.

Thanks for all your efforts, Habryka.