Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
Say what you mean, as plainly as possible.
Try to use words and expressions that a general audience would understand.
Be more casual and less formal if you think that means more people are more likely to understand what you’re trying to say.
To illustrate abstract concepts, give examples.
Where possible, try to let go of minor details that aren’t important to the main point someone is trying to make. Everyone slightly misspeaks (or mis… writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you’re engaging in nitpicking.
When you don’t understand what someone is trying to say, just say that. (And be polite.)
Don’t engage in passive-aggressiveness or code insults in jargon or formal language. If someone’s behaviour is annoying you, tell them it’s annoying you. (If you don’t want to do that, then you probably shouldn’t try to communicate the same idea in a coded or passive-aggressive way, either.)
If you’re using an uncommon word or using a word that also has a more common definition in an unusual way (such as “truthseeking”), please define that word as you’re using it and — if applicable — distinguish it from the more common way the word is used.
Err on the side of spelling out acronyms, abbreviations, and initialisms. You don’t have to spell out “AI” as “artificial intelligence”, but an obscure initialism like “FAOL” (“full automation of labour”), which was made up for one paper, should definitely be spelled out.
When referencing specific people or organizations, err on the side of giving a little more context, so that someone who isn’t already in the know can more easily understand who or what you’re talking about. For example, instead of just saying “MacAskill” or “Will”, say “Will MacAskill” — just using the full name once per post or comment is plenty. You could also mention someone’s profession (e.g. “philosopher”, “economist”) or the organization they’re affiliated with (e.g. “Oxford University”, “Anthropic”). For organizations, when it isn’t already obvious in context, it might be helpful to give a brief description. Rather than saying, “I donated to New Harvest and still feel like this was a good choice”, you could say “I donated to New Harvest (a charity focused on cell cultured meat and similar biotech) and still feel like this was a good choice”. The point of all this is to make what you write easy for more people to understand without lots of prior knowledge or lots of Googling.
When in doubt, say it shorter.[1] In my experience, when I take something I’ve written that’s long and try to cut it down to something short, I usually end up with something a lot clearer and easier to understand than what I originally wrote.
Kindness is fundamental. Maya Angelou said, “At the end of the day people won’t remember what you said or did, they will remember how you made them feel.” Being kind is usually more important than whatever argument you’re having.
Feel free to add your own rules of thumb.
This advice comes from the psychologist Harriet Lerner’s wonderful book Why Won’t You Apologize? — given in the completely different context of close personal relationships. I think it also works here.
I used to feel so strongly about effective altruism. But my heart isn’t in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven’t been able to sustain a vegan diet for more than a short time. And so on.
But there isn’t a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI’s takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.
-The extent to which LessWrong culture has taken over or “colonized” effective altruism culture is such a bummer. I know there’s been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or “rationalism” and the more knowledge and experience of it I’ve gained, the more I’ve accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.
-The stories about sexual harassment are so disgusting. They’re really, really bad and crazy. And it’s so annoying how many comments you see on EA Forum posts about sexual harassment that make exhausting, unempathetic, arrogant, and frankly ridiculous statements, if not borderline incomprehensible in some cases. You see these stories of sexual harassment in the posts and you see evidence of the culture that enables sexual harassment in the comments. Very, very, very bad. Not my idea of a community I can wholeheartedly feel I belong to.
-Kind of a similar story with sexism, racism, and transphobia. The level of underreaction I’ve seen to instances of racism has been crazymaking. It’s similar to the comments under the posts about sexual harassment. You see people justifying or downplaying clearly immoral behaviour. It’s sickening.
-A lot of the response to the Nonlinear controversy was disheartening. It was disheartening to see how many people were eager to enable, justify, excuse, downplay, etc. bad behaviour. Sometimes aggressively, arrogantly, and rudely. It was also disillusioning to see how many people were so… easily fooled.
-Nobody talks normal in this community. At least not on this forum, in blogs, and on podcasts. I hate the LessWrong lingo. To the extent the EA Forum has its own distinct lingo, I probably hate that too. The lingo is great if you want to look smart. It’s not so great if you want other people to understand what the hell you are talking about. In a few cases, it seems like it might even be deliberate obscurantism. But mostly it’s just people making poor choices around communication and writing style and word choice, maybe for some good reasons, maybe for some bad reasons, but bad choices either way. I think it’s rare that writing with a more normal diction wouldn’t enhance people’s understanding of what you’re trying to say, even if you’re only trying to communicate with people who are steeped in the effective altruist niche. I don’t think the effective altruist sublanguage is serving good thinking or good communication.
-I see a lot of interesting conjecture elevated to the level of conventional wisdom. Someone in the EA or LessWrong or rationalist subculture writes a creative, original, evocative blog post or forum post and then it becomes a meme, and those memes end up taking on a lot of influence over the discourse. Some of these ideas are probably promising. Many of them probably contain at least a grain of truth or insight. But they become conventional wisdom without enough scrutiny. Just because an idea is “homegrown”, it takes on the force of a scientific idea that’s been debated and tested in peer-reviewed journals for 20 years, or of a widely held precept of academic philosophy. That seems intellectually wrong and also weirdly self-aggrandizing.
-An attitude I could call “EA exceptionalism”, where people assert that people involved in effective altruism are exceptionally smart, exceptionally wise, exceptionally good, exceptionally selfless, etc. Not just above the average or median (however you would measure that), but part of a rare elite and maybe even superior to everyone else in the world. I see no evidence this is true. (In these sorts of discussions, you also sometimes see the lame argument that effective altruism is definitionally the correct approach to life because effective altruism means doing the most good and if something isn’t doing the most good, then it isn’t EA. The obvious implication of this argument is that what’s called “EA” might not be true EA, and maybe true EA looks nothing like “EA”. So, this argument is not a defense of the self-identified “EA” movement or community or self-identified “EA” thought.)
-There is a dark undercurrent to some EA thought, along the lines of negative utilitarianism, anti-natalism, misanthropy, and pessimism. I think there is a risk of this promoting suicidal ideation because it basically is suicidal ideation.
-Too much of the discourse seems to revolve around how to control people’s behaviours or beliefs. It’s a bit too House of Cards. I recently read about the psychologist Kurt Lewin’s study on the most effective ways to convince women to use animal organs (e.g. kidneys, livers, hearts) in their cooking during meat shortages during World War II. He found that a less paternalistic approach that showed more respect for the women was more effective in getting them to incorporate animal organs into their cooking. The way I think about this is: you didn’t have to be manipulated to get to the point where you are in believing what you believe or caring this much about this issue. So, instead of thinking of how to best manipulate people, think about how you got to the point where you are and try to let people in on that in an honest, straightforward way. Not only is this probably more effective, it’s also more moral and shows more epistemic humility (you might be wrong about what you believe and that’s one reason not to try to manipulate people into believing it).
-A few more things but this list is already long enough.
Put all this together and the old stuff I cared about (charity effectiveness, giving what I can, expanding my moral circle) is lost in a mess of other stuff that is antithetical to what I value and what I believe. I’m not even sure the effective altruism movement should exist anymore. The world might be better off if it closed down shop. I don’t know. It could free up a lot of creativity and focus and time and resources to work on other things that might end up being better things to work on.
I still think there is value in the version of effective altruism I knew around 2015, when the primary focus was on global poverty and the secondary focus was on animal welfare, and AGI was on the margins. That version of effective altruism is so different from what exists today — which is mostly about AGI and has mostly been taken over by the rationalist subculture — that I have to consider those two different things. Maybe the old thing will find new life in some new form. I hope so.
I’d distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they’ve always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil’s website and so far this year it seems well down from 2018 but also well up from 2015, but also 2 months isn’t much of a sample.) Not that that means you’re not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
Yes, this seems similar to how I feel: I think the major donor(s) have re-prioritized, but I am not so sure how many people have switched from other causes to AI. I think EA is more left to the grassroots now, and the forum has probably increased in importance. As long as the major donors don’t make the forum all about AI, we’re fine; if they do, then we have to create a new forum! But as donors change towards AI, the forum will inevitably see more AI content. Maybe some functions to “balance” the forum posts so one gets representative content across all cause areas? Much like they made it possible to separate out community posts?
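If someone wanted to prototype the kind of “balance” function mentioned above, here is one very rough sketch. It is purely hypothetical: the cause-area labels and fields like "cause_area" and "score" are made up for illustration and are not anything the Forum software actually exposes.

```python
import random
from collections import defaultdict

# Hypothetical sketch only: round-robin frontpage slots across cause areas,
# falling back to each area's own score-based ranking. Field names are invented.

def balanced_feed(posts, num_slots=20):
    by_area = defaultdict(list)
    for post in posts:
        by_area[post["cause_area"]].append(post)
    for area_posts in by_area.values():
        area_posts.sort(key=lambda p: p["score"], reverse=True)

    feed = []
    while len(feed) < num_slots and any(by_area.values()):
        for area in list(by_area):
            if by_area[area] and len(feed) < num_slots:
                feed.append(by_area[area].pop(0))
    return feed

# Tiny usage example with fabricated posts.
random.seed(1)
posts = [
    {"title": f"Post {i}",
     "cause_area": random.choice(["AI", "Global health", "Animal welfare", "Community"]),
     "score": random.random()}
    for i in range(50)
]
print([p["cause_area"] for p in balanced_feed(posts, num_slots=8)])
```

The idea is just that each cause area gets a fair share of frontpage slots before the usual ranking takes over; how the real Forum would tag posts or weight areas is an open question.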
On cause prioritization, is there a more recent breakdown of how more and less engaged EAs prioritize? Like an update of this? I looked for this from the 2024 survey but could not find it easily: https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization
Thanks for sharing this. While I personally believe the shift in focus toward AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with many of the other concerns you shared and agree with many of them (especially the LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand if, in your situation, you don’t want to interact with the community anymore, I just want to share that I believe your voice is really important and I hope you continue to engage with EA! I wouldn’t want the movement to discourage anyone who shares its principles (like “let’s use our time and resources to help others the most”), but disagrees with how they’re being put into practice, from actively participating.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then, there are like 90-something to 30-something agree votes and 200 karma on Yarrow’s comment saying NL still seem bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don’t think people dropped the ball here really, people were struggling honestly to take accusations of bad behaviour seriously without getting into witch hunt dynamics.
Good point, I guess my lasting impression wasn’t entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn’t feel discouraged from actively participating in EA.
Since my days of reading William Easterly’s Aid Watch blog back in the late 2000s and early 2010s, I’ve always thought it was a matter of both justice and efficacy to have people from globally poor countries in leadership positions at organizations working on global poverty. All else being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from Canada with an equal level of education, an equal ability to network with the right international organizations, etc.
In practice, this is probably hard to do, since it requires crossing language barriers, cultural barriers, geographical distance, and international borders. But I think it’s worth it.
So much of what effective altruism does, including around global poverty, including around the most evidence-based and quantitative work on global poverty, relies on people’s intuitions, and people’s intuitions formed from living in wealthy, Western countries with no connection to or experience of a globally poor country are going to be less accurate than people who have lived in poor countries and know a lot about them.
Simply put, first-hand experience of poor countries is a form of expertise and organizations run by people with that expertise are probably going to be a lot more competent at helping globally poor people than ones that aren’t.
I agree with most of what you say here; indeed, all things being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from anywhere else. The problem is your caveats: things are almost never equal...
1) Education systems just aren’t nearly as good in lower-income countries. This means that education is sadly barely ever equal. Even between low-income countries—a Kenyan once joked with me that “a Ugandan degree holder is like a Kenyan high school leaver”. If you look at the top echelon of NGO/charity leaders from low-income countries whose charities have grown and scaled big, most have been at least partially educated in richer countries.
2) Ability to network is sadly usually so, so much higher if you’re from a higher-income country. Social capital is real and insanely important. If you look at the very biggest NGOs, most of them were founded not just by Westerners, but by IVY LEAGUE OR OXBRIDGE EDUCATED WESTERNERS: Paul Farmer (Partners in Health) from Harvard, Raj Panjabi (Last Mile Health) from Harvard, Paul Niehaus (GiveDirectly) from Harvard, Rob Mathers (AMF) from Harvard AND Cambridge. With those connections you can turn a good idea into growth so much faster, even compared to super privileged people like me from New Zealand, let alone people with amazing ideas and organisations in low-income countries who just don’t have access to that kind of social capital.
3) The pressures on people from low-income countries to secure their futures are so high that their own financial security will often come first, and the vast majority won’t stay the course with their charity, but will leave when they get an opportunity to further their career. And fair enough too! I’ve seen a number of incredibly talented founders here in Northern Uganda drop their charity for a high-paying USAID job (that ended poorly...), or an overseas study scholarship, or a solid government job. Here’s a telling quote from a great take by @WillieG:
“Roughly a decade ago, I spent a year in a developing country working on a project to promote human rights. We had a rotating team of about a dozen (mostly) brilliant local employees, all college-educated, working alongside us. We invested a lot of time and money into training these employees, with the expectation that they (as members of the college-educated elite) would help lead human rights reform in the country long after our project disbanded. I got nostalgic and looked up my old colleagues recently. Every single one is living in the West now. A few are still somewhat involved in human rights, but most are notably under-employed (a lawyer washing dishes in a restaurant in Virginia, for example).”
https://forum.effectivealtruism.org/posts/tKNqpoDfbxRdBQcEg/?commentId=trWaZYHRzkzpY9rjx
I think (somewhat sadly) a good combination can be for co-founders or co-leaders to be one person from a high-income country with more funding/research connections, and one local person who like you say will be far more effective at understanding the context and leading in locally-appropriate ways. This synergy can cover important bases, and you’ll see a huge number of charities (including mine) founded along these lines.
These realities make me uncomfortable though, and I wish it weren’t so. As @Jeff Kaufman 🔸 said, “I can’t reject my privilege, I can’t give it back”, so I try to use my privilege as best I can to help lift up the poorest people. The organisation I co-founded, OneDay Health, has me as the only employed foreigner, alongside 65 local staff.
There are two philosophies on what the key to life is.
The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.
The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, show compassion and empathy to more people. But this is constantly uncomfortable because, again and again, you have to face the wretched masses of humanity and say “me too, me too, me too” (and realize you are one of them).
I am a total believer in the second philosophy and a hater of the first philosophy. (Not because it’s easy, but because it’s right!) To the extent I care about effective altruism, it’s because of the second philosophy: expand the moral circle, value all lives equally, extend beyond national borders, consider non-human creatures.
When I see people in effective altruism evince the first philosophy, to me, this is a profane betrayal of the whole point of the movement.
One of the reasons (among several other important reasons) that rationalists piss me off so much is their whole worldview and subculture is based on the first philosophy. Even the word “rationalist” is about being superior to other people. If the rationalist community has one founder or leader, it would be Eliezer Yudkowsky. The way Eliezer Yudkowsky talks to and about other people, even people who are actively trying to help him or to understand him, is so hateful and so mean. He exhales contempt. And it isn’t just Eliezer — you can go on LessWrong and read horrifying accounts of how some prominent people in the community have treated their employee or their romantic partner, with the stated justification that they are separate from and superior to others. Obviously there’s a huge problem with racism, sexism, and anti-LGBT prejudice too, which are other ways of feeling separate and above.
There is no happiness to be found at the top of a hierarchy. Look at the people who think in the most hierarchical terms, who have climbed to the tops of the hierarchies they value. Are they happy? No. They’re miserable. This is a game you can’t win. It’s a con. It’s a lie.
In the beautiful words of the Franciscan friar Richard Rohr, “The great and merciful surprise is that we come to God not by doing it right but by doing it wrong!”
(Richard Rohr’s episode of You Made It Weird with Pete Holmes is wonderful if you want to hear more.)
Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried’s behaviour years before the FTX collapse?
Will responded here.
Here is the situation we’re in with regard to near-term prospects for artificial general intelligence (AGI). This is why I’m extremely skeptical of predictions that we’ll see AGI within 5 years.
-Current large language models (LLMs) have extremely limited capabilities. For example, they can’t score above 5% on the ARC-AGI-2 benchmark, they can’t automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying something that happened in 2025 caused something that happened in 2024, while listing the dates of the events. They struggle with things that are easy for humans, like playing hangman.
-The capabilities of LLMs have been improving slowly. There is only a modest overall difference between GPT-3.5 (the original ChatGPT model), which came out in November 2022, and newer models like GPT-4o, o4-mini, and Gemini 2.5 Pro.
-There are signs that there are diminishing returns to scaling for LLMs. Increasing the size of models and the size of the pre-training data doesn’t seem to be producing the desired results anymore. LLM companies have turned to scaling test-time compute to eke out more performance gains, but how far can that go? (For a rough, illustrative picture of what diminishing returns to scaling looks like, see the sketch after this list.)
-There may be certain limits to scaling that are hard or impossible to overcome. For example, once you’ve trained a model on all the text that exists in the world, you can’t keep training on exponentially[3] more text every year. Current LLMs might be fairly close to running out of exponentially[4] more text to train on, if they haven’t run out already.[5]
-A survey of 475 AI experts found that 76% think it’s “unlikely” or “very unlikely” that “scaling up current AI approaches” will lead to AGI. So, we should be skeptical of the idea that just scaling up LLMs will lead to AGI, even if LLM companies manage to keep scaling them up and improving their performance by doing so.
-Few people have any concrete plan for how to build AGI (beyond just scaling up LLMs). The few people who do have a concrete plan disagree fundamentally on what the plan should be. All of these plans are in the early-stage research phase. (I listed some examples in a comment here.)
-Some of the scenarios people are imagining where we get to AGI in the near future involve a strange, exotic, hypothetical process wherein a sub-AGI AI system automates the R&D that gets us from a sub-AGI AI system to AGI. This requires two things to be true: 1) that doing the R&D needed to create AGI is not a task that would require AGI or human-level AI, and 2) that, in the near term, AI systems somehow advance to the point where they’re able to do meaningful R&D autonomously. Given that I can’t even coax o4-mini or Gemini 2.5 Pro into playing hangman properly, and given the slow improvement of LLMs and the signs of diminishing returns to scaling I mentioned, I don’t see how (2) could be true. The arguments for (1) feel very speculative and handwavy.
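To make the diminishing-returns point from the list above concrete, here is a minimal sketch. It assumes a made-up power-law relationship between training compute and loss, loosely in the spirit of published neural scaling laws; none of the constants come from any real model.

```python
# Illustrative only: a fabricated power-law "loss vs. training compute" curve.
# The constants a, alpha, and floor are invented for illustration.

def illustrative_loss(compute: float, a: float = 10.0, alpha: float = 0.05, floor: float = 1.5) -> float:
    """Loss that falls as a power law in compute, approaching an irreducible floor."""
    return floor + a * compute ** (-alpha)

previous = None
for exponent in range(20, 29):  # 1e20 ... 1e28 FLOP: several successive 10x jumps in compute
    loss = illustrative_loss(10.0 ** exponent)
    if previous is not None:
        print(f"1e{exponent} FLOP: loss={loss:.3f}, gained {previous - loss:.3f} over the last 10x")
    previous = loss
```

On a curve like this, each additional 10x of compute buys a smaller absolute improvement than the last one, which is one way of stating the diminishing-returns worry. Whether real frontier models sit on such a curve, and where the floor is, is exactly what’s disputed.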
Given all this, I genuinely can’t understand why some people think there’s a high chance of AGI within 5 years. I guess the answer is they probably disagree on most or all of these individual points.
Maybe they think the conventional written question-and-answer benchmarks for LLMs are fair apples-to-apples comparisons of machine intelligence and human intelligence. Maybe they are really impressed with the last 2 to 2.5 years of progress in LLMs. Maybe they are confident no limits to scaling or diminishing returns to scaling will stop progress anytime soon. Maybe they are confident that scaling up LLMs is a path to AGI. Or maybe they think LLMs will soon be able to take over the jobs of researchers at OpenAI, Anthropic, and Google DeepMind.
I have a hunch (just a hunch) that it’s not a coincidence many people’s predictions are converging (or herding) around 2030, give or take a few years, and that 2029 has been the prophesied year for AGI since Ray Kurzweil’s book The Age of Spiritual Machines in 1999. It could be a coincidence. But I have a sense that there has been a lot of pent-up energy around AGI for a long time and ChatGPT was like a match in a powder keg. I don’t get the sense that people formed their opinions about AGI timelines in 2023 and 2024 from a blank slate.
I think many people have been primed for years by people like Ray Kurzweil and Eliezer Yudkowsky and by the transhumanist and rationalist subcultures to look for any evidence that AGI is coming soon and to treat that evidence as confirmation of their pre-existing beliefs. You don’t have to be directly influenced by these people or by these subcultures to be influenced. If enough people are influenced by them or a few prominent people are influenced, then you end up getting influenced all the same. And when it comes to making predictions, people seem to have a bias toward herding, i.e., making their predictions more similar to the predictions they’ve heard, even if that ends up making their predictions less accurate.
The process by which people come up with the year they think AGI will happen seems especially susceptible to herding bias. You ask yourself when you think AGI will happen. A number pops into your head that feels right. How does this happen? Who knows.
If you try to build a model to predict when AGI will happen, you still can’t get around it. Some of your key inputs to the model will require you to ask yourself a question and wait a moment for a number to pop into your head that feels right. The process by which this happens will still be mysterious. So, the model is ultimately no better than pure intuition because it is pure intuition.
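As a toy illustration of the herding dynamic described above, here is a sketch with entirely made-up parameters (the 2045 centre, the 15-year spread, and the 0.6 herding weight are arbitrary assumptions, not a model of any real forecaster population):

```python
import random

# Toy herding simulation with invented parameters: each forecaster has a private
# guess about the "AGI year" but publicly announces a blend of that guess and the
# average of the guesses announced before theirs.

random.seed(0)
PRIVATE_SPREAD = 15      # assumed spread of private guesses around 2045
HERDING_WEIGHT = 0.6     # assumed weight forecasters put on what they've already heard

def spread(values):
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

private, announced = [], []
for _ in range(30):
    guess = random.gauss(2045, PRIVATE_SPREAD)
    private.append(guess)
    if announced:
        consensus = sum(announced) / len(announced)
        guess = (1 - HERDING_WEIGHT) * guess + HERDING_WEIGHT * consensus
    announced.append(guess)

print(f"spread of private guesses:   {spread(private):.1f} years")
print(f"spread of announced guesses: {spread(announced):.1f} years")
```

With these assumptions, the announced timelines end up much more tightly clustered than the private guesses, even though nobody has gained any new information. That is the sense in which convergence around a particular year can look like independent agreement when it partly isn’t.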
I understand that, in principle, it’s possible to make more rigorous predictions about the future than this. But I don’t think that applies to predicting the development of a hypothetical technology where there is no expert agreement on the fundamental science underlying that technology, and not much in the way of fundamental science in that area at all. That seems beyond the realm of ordinary forecasting.
This post discusses LLMs and labour automation in the section “Real-World Adoption”.
One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:

“These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR [resolution rate] and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.”
I’m using “exponentially” colloquially to mean every year the LLM’s training dataset grows by 2x or 5x or 10x — something along those lines. Technically, if the training dataset increased by 1% a year, that would be exponential, but let’s not get bogged down in unimportant technicalities.
Yup, still using it colloquially.
Epoch AI published a paper in June 2024 that predicts LLMs will exhaust the Internet’s supply of publicly available human-written text between 2026 and 2032.
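As a small arithmetic aside on the “exponentially” footnotes above: both a 1%-per-year and a 10x-per-year growth rate are technically exponential, but they put wildly different demands on the finite supply of text. The 10-trillion-token starting corpus below is a made-up figure, purely for illustration.

```python
# Illustrative arithmetic only: grow a fabricated 10-trillion-token corpus for five
# years at two rates that are both technically "exponential".

START_TOKENS = 10e12  # assumed baseline corpus size, not a real figure

for label, annual_factor in [("1% per year", 1.01), ("10x per year", 10.0)]:
    tokens = START_TOKENS * annual_factor ** 5
    print(f"{label}: after 5 years the corpus would need about {tokens:.2e} tokens")
```

The colloquial sense used in the footnotes is the second kind of curve, which is the one that runs into the limits of available human-written text quickly.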
Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
I’m sorry you encountered this, and I don’t want to minimise your personal experience.
I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.
On the whole though, I’ve found the EA community (both online and those I’ve met in person) to be incredibly pro-LGBT and pro-trans. Both the underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism, etc.) and the underlying demographics (e.g. young, highly educated, socially liberal) point that way.
I think where there might be a split is in progressive (as in, politically leftist) framings of issues and the type of language used to talk about these topics. I think those often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don’t think that means the community as a whole, or even that sub-section, is ‘anti-LGBT’ or ‘anti-trans’, and I think there are historical and multifaceted reasons why there’s some enmity between ‘progressive’ and ‘EA’ camps/perspectives.
Nevertheless, I’m sorry that you experienced this sentiment, and I hope you’re feeling ok.
The progressive and/or leftist perspective on LGB and trans people offers the most forthright argument for LGB and trans equality and rights. The liberal and/or centre-left perspective tends to be more milquetoast, more mealy-mouthed, more fence-sitting.