Pronouns: she/her or they/them.
I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I'm trying to figure out where effective altruism can fit into my life these days and what it means to me.
Yarrow
I looked at every link in this post and the most useful one for me was this one where you list off examples of uncomfortable cross-cultural interactions from your interviewees. Especially seeing all the examples together rather than just one or two.
I'm a Westerner, but I'm LGBT and a feminist, so I'm familiar with analogous social phenomena. Instances of discrimination or prejudice often have a level of ambiguity. Was that person dismissive toward me because of my identity characteristics, or are they just dismissive toward everyone... or were they in a bad mood...? You form a clearer picture when you add up multiple experiences, and especially experiences from multiple people. That's when you start to see a pattern.
As a person in an identity group that is discriminated against, sometimes you can have a weird feeling that, statistically, you know discrimination is happening, but you don't know for sure exactly which events are discrimination and which aren't. Some instances of discrimination are clearer, such as someone invoking a trope or cliché about your group, but any individual instance of someone talking over you, disregarding your opinion, not taking an interest in you, not giving you time to speak, and so on, is theoretically consistent with someone being generally rude or disliking you personally. Stepping back and seeing the pattern is what makes all the difference.
This might be the most important thing that people who do not experience discrimination don't understand. Some people think that people who experience discrimination are just overly sensitive or are overreacting or are seeing malicious intent where it doesn't exist. Since so many individual examples of discrimination or potential discrimination can be explained away as someone being generally rude, or in a bad mood, or just not liking someone personally, or whatever, it is possible to deny that discrimination exists, or at least that it exists to the extent that people are claiming.
But discerning causality in the real world is not always so clean and simple and obvious (that's why we need clinical trials for drugs, for example), and the world of human interaction is especially complex and subtle.
You could look at any one example on the list you gave and try to explain it away. I got the sense that your interviewees shared this sense of ambiguity. For example: "L felt uncertain about what factors contributed to that dynamic, but they suspected the difference in culture may play a part." When you see all the examples collected together, from the experiences of several different people, it is much harder to explain it all away.
You could claim that it's wrong of me to only give one of my children a banana, even if that's the only child who's hungry. Some would say I should always split that banana in half, for egalitarian reasons. That view is in stark contrast to EA thinking, and it's hard to rebut respectfully and with rigor.
In an undergrad philosophy class, the way my prof described examples like this is as being about equality of regard or equality of concern. For example, if there are two nearby cities and one gets hit by a hurricane, the federal government is justified in sending aid just to the city that's been damaged by the hurricane, rather than to both cities in order to be "fair". It is fair: the government is responding equally to the needs of all people, and the people who got hit by the hurricane are more in need of help.
Sam wrote the letter below to our employees and stakeholders about why we are so excited for this new direction.
God. Sam Altman didn't get to do what he wanted, and now we're supposed to believe he's "excited"? This corporate spin is driving me crazy!
But, that aside, I'm glad OpenAI has backed down, possibly because the Attorney General of Delaware or California, or both of them, told OpenAI they would block Sam's attempt to break the OpenAI company free from the non-profit's control.
It seems more likely to me that OpenAI gave up because they had to give up, although this blog post is trying to spin it as if they changed their minds (which I doubt really happened).
Truly a brash move to try to betray the non-profit.
Once again Sam is throwing out gigantic numbers for the amounts of capital he theoretically wants to raise:

We want to be able to operate and get resources in such a way that we can make our services broadly available to all of humanity, which currently requires hundreds of billions of dollars and may eventually require trillions of dollars.

I wonder if his reasoning is that everyone in the world will use ChatGPT, so he multiplies the hardware cost of running one instance of GPT-5 by the world population (8.2 billion), and then adjusts down for utilization. (People gotta sleep and can't use ChatGPT all day! Although maybe they'll run deep research overnight.)
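Just to make the arithmetic I'm imagining concrete, here's a rough back-of-envelope sketch. Every number in it is a placeholder I made up for illustration (especially the per-user hardware cost and the utilization figure); none of it comes from OpenAI.

```python
# Back-of-envelope version of the guess above. All numbers are made-up
# placeholders for illustration, not figures OpenAI has published.

world_population = 8.2e9        # people
hardware_cost_per_user = 500    # USD of serving hardware per heavy, all-day user (assumed)
utilization = 0.25              # fraction of the day an average person actually uses it (assumed)

capital_needed = world_population * hardware_cost_per_user * utilization
print(f"${capital_needed / 1e12:.1f} trillion")  # -> $1.0 trillion
```

With placeholder numbers anywhere in this ballpark, you land in the "trillions of dollars" range pretty quickly, which is roughly the scale the quote gestures at.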
Looks like the lede was buried:

Instead of our current complex capped-profit structure - which made sense when it looked like there might be one dominant AGI effort but doesn't in a world of many great AGI companies - we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.
The nonprofit will continue to control the PBC, and will become a big shareholder in the PBC, in an amount supported by independent financial advisors, giving the nonprofit resources to support programs so AI can benefit many different communities, consistent with the mission.
At first, I thought this meant the non-profit will go from owning 51% of the company (or whatever it is) to a much smaller percentage. But I tried to confirm this and found an article that claims the OpenAI non-profit only owns 2% of the OpenAI company. I don't know whether that's true. I can't find clear information on the size of the non-profit's ownership stake.
The data in this paper comes from the 2006 paper "Disease Control Priorities in Developing Countries".
I don't understand. Does this paper not support the claim?
I've actually never heard this claim before, personally. Instead, people like Toby Ord talked about how curing someone's blindness through the Fred Hollows Foundation was 1,000x cheaper than training a seeing eye dog.
Here is the situation we're in with regard to near-term prospects for artificial general intelligence (AGI). This is why I'm extremely skeptical of predictions that we'll see AGI within 5 years.
- Current large language models (LLMs) have extremely limited capabilities. For example, they can't score above 5% on the ARC-AGI-2 benchmark, they can't automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying something that happened in 2025 caused something that happened in 2024, while listing the dates of the events. They struggle with things that are easy for humans, like playing hangman.
- The capabilities of LLMs have been improving slowly. There is only a modest overall difference between GPT-3.5 (the original ChatGPT model), which came out in November 2022, and newer models like GPT-4o, o4-mini, and Gemini 2.5 Pro.
- There are signs that there are diminishing returns to scaling for LLMs. Increasing the size of models and the size of the pre-training data doesn't seem to be producing the desired results anymore. LLM companies have turned to scaling test-time compute to eke out more performance gains, but how far can that go?
- There may be certain limits to scaling that are hard or impossible to overcome. For example, once you've trained a model on all the text that exists in the world, you can't keep training on exponentially[3] more text every year. Current LLMs might be fairly close to running out of exponentially[4] more text to train on, if they haven't run out already.[5] (There's a toy version of this arithmetic in the sketch after this list.)
- A survey of 475 AI experts found that 76% think it's "unlikely" or "very unlikely" that "scaling up current AI approaches" will lead to AGI. So, we should be skeptical of the idea that just scaling up LLMs will lead to AGI, even if LLM companies manage to keep scaling them up and improving their performance by doing so.
- Few people have any concrete plan for how to build AGI (beyond just scaling up LLMs). The few people who do have a concrete plan disagree fundamentally on what the plan should be. All of these plans are in the early-stage research phase. (I listed some examples in a comment here.)
- Some of the scenarios people are imagining where we get to AGI in the near future involve a strange, exotic, hypothetical process wherein a sub-AGI AI system can automate the R&D that gets us from a sub-AGI AI system to AGI. This requires two things to be true: 1) that doing the R&D needed to create AGI is not a task that would require AGI or human-level AI and 2) that, in the near term, AI systems somehow advance to the point where they're able to do meaningful R&D autonomously. Given that I can't even coax o4-mini or Gemini 2.5 Pro into playing hangman properly, and given the slow improvement of LLMs and the signs of diminishing returns to scaling I mentioned, I don't see how (2) could be true. The arguments for (1) feel very speculative and handwavy.
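To make the running-out-of-text point concrete, here's a toy version of the arithmetic. Both token counts are rough placeholders I picked for illustration, not Epoch AI's actual estimates (their paper is cited in footnote [5]), and "exponentially" is used in the colloquial sense from footnote [3].

```python
# Toy illustration of why exponential growth in training data hits a
# ceiling quickly. Both token counts are rough placeholder figures,
# not Epoch AI's actual estimates.

dataset_tokens = 15e12       # tokens in a current frontier pre-training set (assumed)
total_text_stock = 300e12    # total stock of usable public human-written text (assumed)
growth_per_year = 4          # "exponential" growth factor, colloquially speaking

year = 2025
while dataset_tokens < total_text_stock:
    dataset_tokens *= growth_per_year
    year += 1

print(year)  # -> 2028: even generous assumptions only buy a few more years of growth
```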
Given all this, I genuinely can't understand why some people think there's a high chance of AGI within 5 years. I guess the answer is they probably disagree on most or all of these individual points.
Maybe they think the conventional written question-and-answer benchmarks for LLMs are fair apples-to-apples comparisons of machine intelligence and human intelligence. Maybe they are really impressed with the last 2 to 2.5 years of progress in LLMs. Maybe they are confident no limits to scaling or diminishing returns to scaling will stop progress anytime soon. Maybe they are confident that scaling up LLMs is a path to AGI. Or maybe they think LLMs will soon be able to take over the jobs of researchers at OpenAI, Anthropic, and Google DeepMind.
I have a hunch (just a hunch) that it's not a coincidence many people's predictions are converging (or herding) around 2030, give or take a few years, and that 2029 has been the prophesied year for AGI since Ray Kurzweil's book The Age of Spiritual Machines in 1999. It could be a coincidence. But I have a sense that there has been a lot of pent-up energy around AGI for a long time and ChatGPT was like a match in a powder keg. I don't get the sense that people formed their opinions about AGI timelines in 2023 and 2024 from a blank slate.
I think many people have been primed for years by people like Ray Kurzweil and Eliezer Yudkowsky and by the transhumanist and rationalist subcultures to look for any evidence that AGI is coming soon and to treat that evidence as confirmation of their pre-existing beliefs. You don't have to be directly influenced by these people or by these subcultures to be influenced. If enough people are influenced by them or a few prominent people are influenced, then you end up getting influenced all the same. And when it comes to making predictions, people seem to have a bias toward herding, i.e., making their predictions more similar to the predictions they've heard, even if that ends up making their predictions less accurate.
The process by which people come up with the year they think AGI will happen seems especially susceptible to herding bias. You ask yourself when you think AGI will happen. A number pops into your head that feels right. How does this happen? Who knows.
If you try to build a model to predict when AGI will happen, you still can't get around this problem. Some of your key inputs to the model will require you to ask yourself a question and wait a moment for a number to pop into your head that feels right. The process by which this happens will still be mysterious. So, the model is ultimately no better than pure intuition because it is pure intuition.
I understand that, in principle, it's possible to make more rigorous predictions about the future than this. But I don't think that applies to predicting the development of a hypothetical technology where there is no expert agreement on the fundamental science underlying that technology, and not much in the way of fundamental science in that area at all. That seems beyond the realm of ordinary forecasting.
[1] This post discusses LLMs and labour automation in the section "Real-World Adoption".
[2] One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:

These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality - RR [resolution rate] and customer satisfaction - suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
[3] I'm using "exponentially" colloquially to mean every year the LLM's training dataset grows by 2x or 5x or 10x - something along those lines. Technically, if the training dataset increased by 1% a year, that would be exponential, but let's not get bogged down in unimportant technicalities.
[4] Yup, still using it colloquially.
[5] Epoch AI published a paper in June 2024 that predicts LLMs will exhaust the Internet's supply of publicly available human-written text between 2026 and 2032.
In that case, I apologize. I don't know you and I don't know your background or intentions, and apparently I was wrong about both.
I think the experience you're describing (feeling a sense of guilt or grief or sadness or obligation that's so big you don't know how to handle it) is something that probably the majority of people who have participated in the effective altruist movement have felt at one time or another. I've seen many people describe feeling this way, both online and in real life.
When I was an organizer at my university's effective altruist group, several of the friends I made through that group expressed these kinds of feelings. This stuff weighed on us heavily.
I haven't read the book Strangers Drowning, but I've heard it described, and I know it's about people who go to extreme lengths to answer the call of moral obligation. Maybe that book would interest you. I don't know.
This topic goes beyond the domain of ethical theory into a territory that is part existential, part spiritual, and part psychotherapeutic. It can be dangerous not to handle this topic with care because it can get out of control. It can contribute to clinical depression and anxiety, it can motivate people to inflict pain on others, or people can become overzealous, overconfident, and adopt an unfair sense of superiority over other people.
I find it useful to draw on examples from fantasy and sci-fi to think about this sort of thing. In the Marvel universe, the Infinity Stones can only be wielded safely by god-like beings, and normal humans or mortals die when they try to use them. The Stones even pose a danger to some superhuman beings, like Thanos and the Hulk. In Star Trek: Picard, there is an ancient message left by an unknown, advanced civilization. When people try to watch/listen to the message, it drives most of them to madness. There are other examples of this sort of thing: something so powerful that coming into contact with it, even coming near it, is incredibly dangerous.
To try to reckon with the suffering of the whole world is like that. Not impossible, not something to be avoided forever, but something dangerous to be approached with caution. People who approach it recklessly can destroy themselves, destroy others, or succumb to madness.
There is a connection between reckoning with the world's suffering and one's own personal suffering. In two different ways. First, how we think and feel about one influences how we think and feel about the other. Second, I think a lot of the wisdom about how people should reckon with their own suffering probably applies well to reckoning with the world's suffering. With someone's personal trauma or grief, we know (or at least people who go to therapy know) that it's important for that person to find a safe container to express their thoughts and feelings about it. Talking about it just anywhere or to just anyone, without regard for whether that's a safe container, is unsafe and unwise.
We know that, after the initial shock of a loss or a traumatic event, it isn't healthy for a person to focus on their trauma or grief all the time, to the exclusion of other things. But trying to avoid it completely and forever isn't a good strategy either. We know that the path is never simple, clean, or easy. Connection to other people who have been through or who are going through similar things is often helpful, as is the counsel of a helping professional like a therapist or social worker (or in some cases a spiritual or religious leader), but the help doesn't come in the form of outlining a straightforward step-by-step process. What helps someone reckon with or make sense of their own emotional suffering is often personal to that individual and not generally applicable.
For example, in the beautiful (and unfairly maligned) memoir Eat, Pray, Love, Elizabeth Gilbert talks about a point in her life when she feels completely crushed, and when she's seriously, clinically unwell. She describes how, when nothing else feels enjoyable or interesting, she discovers desperately needed pleasure in learning Italian.
I don't think in the lowest times of my life I would find any pleasure in learning Italian. I don't think in the best or most mediocre times of my life I would find any pleasure in learning Italian. The specific thing that helps is usually not generalizable to everyone who's suffering (which, ultimately, is everyone) and is usually not predictable in advance, including by the person it ends up helping.
So, the question of how to face the world's darkness or the world's suffering, or how to recover from a breakdown when the world's darkness or suffering seems too much, is an answerable question, but it's not answerable in a universal, simple, or direct way. It's about your relationship with the universe, which is something for you and the universe to figure out.
As I indicated above, I like to take things from fantasy and sci-fi to make sense of the world. In The Power of Myth, Joseph Campbell laments that society lacks modern myths. He names Star Wars as the rare exception. (Return of the Jedi came out a few years before The Power of Myth was recorded.) Nowadays, there are lots of modern myths, if you count things like Star Trek, Marvel, X-Men, and Dungeons & Dragons.
I also rely a lot on spiritual and religious teachings. This episode of the RobCast with Rob Bell is relevant to this topic and a great episode. Another great episode, also relevant, is "Light Heavy Light".
In the more psychotherapeutic realm, I love everything Brené Brown has done: her books, her TV show, her audio programs, her TED Talks. I've never heard her directly talk about global poverty, but she talks about so much that is relevant to the questions you asked in one way or another. In her book Rising Strong, she talks about her emotional difficulty facing (literally and figuratively) the people in her city who are homeless. Initially, she decided what she needed to do to resolve this emotional difficulty was to do more to help. She did, but she didn't feel any differently. This led to a deeper exploration.

In her book Braving the Wilderness, she talks about how she processed collective tragedies like the Challenger disaster and the killings of the kids and staff at Sandy Hook Elementary School. This is what you're asking about: how to process grief over tragedies that are collective and shared by the world, not personal just to you.
Finally, a warning. In my opinion, a lot of people in effective altruism, including on the Effective Altruism Forum, have not found healthy ways of reckoning with the suffering of the world. There are a few who are so broken by the suffering of the world that they believe life was a mistake and we would be better off returning to non-existence. (In the Dungeons & Dragons lore, these people would be like the worshippers of Shar.) Many are swept up in another kind of madness: eschatological prophecies around artificial general intelligence. Many numb, detach, or intellectualize rather than feel. A lot of energy goes into fighting.
So, the wisdom you are seeking you will probably not find here. You will find good debates on charity effectiveness. Maybe some okay discussions of ethical theory. Not wisdom on how to deal with the human condition.
Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
Say what you mean, as plainly as possible.
Try to use words and expressions that a general audience would understand.
Be more casual and less formal if you think that means more people are more likely to understand what you're trying to say.
To illustrate abstract concepts, give examples.
Where possible, try to let go of minor details that aren't important to the main point someone is trying to make. Everyone slightly misspeaks (or mis... writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you're engaging in nitpicking.
When you don't understand what someone is trying to say, just say that. (And be polite.)
Don't engage in passive-aggressiveness or code insults in jargon or formal language. If someone's behaviour is annoying you, tell them it's annoying you. (If you don't want to do that, then you probably shouldn't try to communicate the same idea in a coded or passive-aggressive way, either.)
If you're using an uncommon word or using a word that also has a more common definition in an unusual way (such as "truthseeking"), please define that word as you're using it and, if applicable, distinguish it from the more common way the word is used.
Err on the side of spelling out acronyms, abbreviations, and initialisms. You don't have to spell out "AI" as "artificial intelligence", but an obscure term like "full automation of labour" or "FAOL" that was made up for one paper should definitely be spelled out.
When referencing specific people or organizations, err on the side of giving a little more context, so that someone who isn't already in the know can more easily understand who or what you're talking about. For example, instead of just saying "MacAskill" or "Will", say "Will MacAskill"; just using the full name once per post or comment is plenty. You could also mention someone's profession (e.g. "philosopher", "economist") or the organization they're affiliated with (e.g. "Oxford University", "Anthropic"). For organizations, when it isn't already obvious in context, it might be helpful to give a brief description. Rather than saying, "I donated to New Harvest and still feel like this was a good choice", you could say "I donated to New Harvest (a charity focused on cell cultured meat and similar biotech) and still feel like this was a good choice". The point of all this is to make what you write easy for more people to understand without lots of prior knowledge or lots of Googling.
When in doubt, say it shorter.[1] In my experience, when I take something I've written that's long and try to cut it down to something short, I usually end up with something a lot clearer and easier to understand than what I originally wrote.
Kindness is fundamental. Maya Angelou said, "At the end of the day people won't remember what you said or did, they will remember how you made them feel." Being kind is usually more important than whatever argument you're having.
Feel free to add your own rules of thumb.
[1] This advice comes from the psychologist Harriet Lerner's wonderful book Why Won't You Apologize?, given in the completely different context of close personal relationships. I think it also works here.
(Decided to also publish this as a quick take, since it's so generally applicable.)
I find this completely inscrutable. I'm not saying there's anything wrong with it in terms of accuracy, it's just way too in the weeds of the statistics for me to decipher what's going on.
For example, I don't know what a "middle half" or a "central half" is. I looked them up and now I know they are statistics terms, but it would be a lot of work for me to try to figure out what that quoted paragraph is trying to say.
Is AI Impacts going to run this survey again soon? Maybe they can phrase the questions differently in a new survey to avoid this level of confusion between different levels of AI capabilities.
I think if you don't note in the body of the text, rather than just in a footnote, that just anybody can predict anything on Metaculus, then this will inevitably be misleading to anyone who doesn't already know what Metaculus is, since you imply in the post that it's an aggregator of expert predictions when you claim:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And then go on to list Metaculus as support for this claim. That implies Metaculus is an aggregator of expert predictions.
Also, you include Metaculus in a long list of expert predictions without noting that it's different from the other items on the list, which reinforces the implication that it's an aggregator of expert predictions.
I think you should also explain what Samotsvety is in the body of the text and what its forecastersâ credentials are.
Invoking "experts" and then using the term this loosely feels misleading.
I think it also bears mentioning the strange feature of the 2023 AI Impacts survey where there's a 69-year gap between the AI experts' prediction of "high-level machine intelligence" and "full automation of labour" (50% chance by 2116). This is such an important (and weird, and confusing) fact about the survey that I think it should be mentioned anytime that survey is brought up.
This is especially relevant since you say:
On the whole, experts think human-level AI is likely to arrive in your lifetime.
And if you think human-level AI means full automation of labour rather than high-level machine intelligence, then a 50% chance by 2116 (91 years from now) is not within the current life expectancy of most adults alive today or even most teenagers.
There is some ambiguity in claims about whether an LLM knows how to do something. The spectrum of knowing how to do things ranges all the way from "Can it do it at least once, ever?" to "Does it do it reliably, every time, without fail?".
My experience was that I tried to play hangman with o4-mini twice and it failed both times in the same really goofy way, where it counted my guesses wrong when I guessed a letter that was in the word it later said I was supposed to be guessing.
When I played the game with o4-mini where it said the word was "butterfly" (and also said there was no "B" in the word when I guessed "B"), I didn't prompt it to make the word hard. I just said, after it claimed to have picked the word:
"E. Also, give me a vague hint or a general category."
o4-mini said:
"It's an animal."
So, maybe asking for a hint or a category is the thing that causes it to fail. I don't know.
Even if I accepted the idea that the LLM "wants me to lose" (which sounds dubious to me), it doesn't know how to do that properly, either. In the "butterfly" example, it could, in theory, have chosen a word retroactively that filled in the blanks but didn't conflict with any guesses it said were wrong. But it didn't do that.
In the attempt where the word was "schmaltziness", o4-mini's response about which letters were where in the word (which I pasted in a footnote to my previous comment) was borderline incoherent. I could hypothesize that this was part of a secret strategy on its part to follow my directives, but much more likely, I think, is that it just lacks the capability to execute the task reliably.
Fortunately, we don't have to dwell on hangman too much, since there are rigorous benchmarks like ARC-AGI-2 that show more conclusively that the reasoning abilities of o3 and o4-mini are poor compared to typical humans.
I haven't looked at any surveys, but it seems universal to care about future generations. This doesn't mean people will necessarily act in a way that protects future generations' interests (it doesn't mean they won't pollute or deforest, for example), but the idea is not controversial and is widely accepted.
Similarly, I think it's basically universal to believe that all humans, in principle, have some value and have certain rights that should not be violated, but then, in practice, factors like racism, xenophobia, hatred based on religious fundamentalism, anti-LGBT hatred, etc. lead many people to dehumanize certain humans. There is typically an attempt to morally justify this, though, for example through appeals to "self-defense" (or similar concepts).
If you apply strict standards to the belief that everyone alive today is worthy of moral concern, then some self-identified effective altruists would fail the test, since they hold dehumanizing views about Black people, LGBT people, women, etc.
That's getting into a different point than the one I was trying to make in the chunk of text you quoted, which is just that Will MacAskill didn't fall out of a coconut tree and come up with the idea that future generations matter yesterday. His university, Oxford, is over 900 years old. I believe in his longtermism book he cites the Iroquois principle of making decisions while considering how they will affect the next seven generations. Historically, many (most?) families on Earth have had close relationships between grandparents and grandchildren. Passing down tradition and transmitting culture (e.g., stories, rituals, moral principles) over long timescales is considered important in many cultures and religions.
There is a risk of a sort of plagiarism with this kind of discourse, where people take ideas that have existed for centuries or millennia across many parts of the world and then package them as if they are novel, without adequately acknowledging the history of the ideas. That's like the effective altruist's or the ethical theorist's version of "not invented here".
I guess my mistake was interpreting your quick take as a sincere question rather than a rhetorical question. I now realize you were asking a rhetorical question in order to make an argument and not actually asking for people to try to answer your question.
My initial interpretation (the reason why I replied) was that you were feeling a lot of guilt about your level of moral responsibility for or complicity in global poverty and the harm it does to people. I wanted to help alleviate your guilt, which I think, when taken to such an extreme, can be paralyzing and counterproductive. I've seen no evidence it actually helps anyone and lots of evidence of it doing harm.
I already tried to make several points in my previous comment. I'll try to make one more.
You say there is "hidden violence" in the world economic system. Well, knowledge is a component of moral culpability. A famous line from the Watergate scandal was this question a U.S. Senator asked about Richard Nixon: "What did the president know and when did he know it?" The extent to which you know about something affects how morally culpable you are.
There is another layer of complexity beyond this. For example, there is the concept in law of willful ignorance. If you get involved in something that you know or have reasonable grounds to believe is criminal activity and choose not to know certain details in order to try to protect yourself from legal liability, this will probably not hold up as a legal defense and you will probably still be held criminally liable.
But I think it would be a stretch to try to apply the concept of "willful ignorance" to global poverty or the world economic system, since people's ignorance of the "hidden violence" you describe, if it indeed exists, is genuine and not a ruse to try to avoid culpability.
The moral culpability of normal Germans in the 1930s and 1940s is a complex topic that requires knowing a lot about this time and place in history, which I do not. I think everyone would agree that, for example, a child forced to join the Hitler Youth has a lot less moral culpability than someone with a leadership position in the Nazi Party. So, there is some ambiguity in the term "Nazi" that you have to reckon with to discuss this topic.
But I don't think it is ethical to drag this complex discussion about this period in history into a debate about effective altruism.
Nazi analogies should be used with a lot of sensitivity and care. By invoking Nazi crimes against humanity in order to try to make some rhetorical point about an unrelated topic, you risk diminishing the importance of these grim events and disrespecting the victims. There are hundreds of thousands of Holocaust survivors alive today. There are many Jewish families who lost relatives in the Holocaust. Many families are affected by the intergenerational trauma of the Holocaust. It seems completely disrespectful to them to try to turn their suffering and loss into effective altruist rhetoric.
So, I have indulged the Nazi analogies enough. I will not entertain this any more.
If you want to make an argument that there are high moral demands on us to respond humanely to global poverty, many people (such as Peter Singer, as I mentioned in my previous comment) have argued this using vivid analogies that have captured people's imaginations and have helped persuade many of them (including me) to try to do more to help the globally poor.
A lot of people within the effective altruist movement seem to basically agree with you. For example, Will MacAskill, one of the founders of the effective altruist movement, has recently said he's only going to focus on artificial general intelligence (AGI) from now on. The effective altruist organization 80,000 Hours has said more or less the same: their main focus is going to be AGI. For many others in the EA movement, AGI is their top priority and the only thing they focus on.
So, basically, you are making an argument for which there is already a lot of agreement in EA circles.
As you pointed out, uncertainty about the timeline of AGI and doubts about very near-term AGI are among the main reasons to focus on global poverty, animal welfare, or other cause areas not related to AGI.
There is no consensus on when AGI will happen.
A 2023 survey of AI experts found they believed there is a 50% chance of AI and AI-powered robots being able to automate all human jobs by 2116. (Edited on 2025-05-05 at 06:16 UTC: I should have mentioned the same study also asked the experts when they think AI will be able to do all tasks that a human can do. The aggregated prediction was a 50% chance by 2047. We don't know for sure why they gave such different predictions for these two similar questions.)
In 2022, a group of 31 superforecasters predicted a 50% chance of AGI by 2081.
My personal belief is that we have no idea how to create AGI and we have no idea when we'll figure out how to create it. In addition to the expert and superforecaster predictions I just mentioned, I recently wrote a rapid-fire list of reasons I think predictions of AGI within 5 years are extremely dubious.
I agree that it's a significant milestone, or at least it might be. I just read this comment a few hours ago (and the Twitter thread it links to) and that dampens my enthusiasm. 43 million words to solve one ARC-AGI-1 puzzle is a lot.
Also, I want to understand more about how ARC-AGI-2 is different from ARC-AGI-1. Chollet has said that about half of the tasks in ARC-AGI-1 turned out to be susceptible to "brute force"-type approaches. I don't know what that means.
I think it's easy to get carried away with the implications of a result like this when you're surrounded by so many voices saying that AGI is coming within 5 years or within 10 years.
My response to François Chollet's comments on o3's high score on ARC-AGI-1 was more like, "Oh, that's really interesting!" rather than making some big change to my views on AGI. I have to say, I was more excited about it before I knew it took 43 million words of text and over 1,000 attempts per task.
I still think no one knows how to build AGI and that (not unrelatedly) we don't know when AGI will be built.
Chollet recently started a new company focused on combining deep learning and program synthesis. That's interesting. He seems to think the major AI labs like OpenAI and Google DeepMind are also working on program synthesis, but I don't know how much publicly available evidence there is for this.
I can add Chollet's company to the list of organizations I know of that have publicly said they're doing R&D related to AGI other than just scaling LLMs. The others I know of:
The Alberta Machine Intelligence Institute and Keen Technologies, both organizations where Richard Sutton is a key person and which (if I understand correctly) are pursuing, at least to some extent, Sutton's "Alberta Plan for AI Research"
Numenta, a company co-founded by Jeff Hawkins, who has made aggressive statements about Numenta's ability to develop AGI in the not-too-distant future using insights from neuroscience (the main insights they think they've found are described here)
Yann LeCun's team at Meta AI, formerly FAIR; LeCun has published a roadmap to AGI, except he doesn't call it AGI
I might be forgetting one or two. I know in the past Demis Hassabis has made some general comments about DeepMind's research related to AGI, but I don't know of any specifics.
My gut sense is that all of these approaches will fail: program synthesis combined with deep learning, the Alberta Plan, Numenta's Thousand Brains Principles, and Yann LeCun's roadmap. But this is just a random gut intuition and not a serious, considered opinion.
I think the idea that we're barreling toward the imminent, inevitable invention of AGI is wrong. The idea is that AGI is so easy to invent and progress is happening so fast and so spontaneously that we can hardly stop ourselves from inventing AGI.
It would be seen as odd to take this view in any other area of technology, probably even among effective altruists. We would be lucky if we were barreling toward imminent, inevitable nuclear fusion or a universal coronavirus vaccine or a cure for cancer or any number of technologies that don't exist yet that we'd love to have.
Why does no one claim these technologies are being developed so spontaneously, so automatically, that we would have to take serious action to prevent them from being invented soon? Why, for those technologies, is the attitude that progress is hard, success is uncertain, and the road is long?
Given that that's how technology usually works, and given that I don't see any reason for AGI to be easier or to take less time (in fact, it seems like it should be harder and take longer, since the science of intelligence and cognition is among the least understood areas of science), I'm inclined to guess that most approaches will fail.
Even if the right general approach is found, it could take a very long time to figure out how to actually make concrete progress using that approach. (By analogy, many of the general ideas behind deep learning existed for decades before deep learning started to take off around 2012.)
I'm interested in Chollet's interpretation of the o3 results on ARC-AGI-1, and if there is a genuine, fundamental advancement involved (which today, after finding out those details about o3's attempts, I believe less than I did yesterday), then that's exciting. But only moderately exciting, because the advancement is only incremental.
The story that AGI is imminent and that, if we skirt disaster, we'll land in utopia is exciting and engaging. I think we live in a more boring version of reality (but still, all things considered, a pretty interesting one!) where we're still at the drawing board stage for AGI, people are pitching different ideas (e.g., program synthesis, the Alberta Plan, the Thousand Brains Principles, energy-based self-supervised learning), the way forward is unclear, and we're mostly in the dark about the fundamental nature of intelligence and cognition. Who knows how long it will take us to figure it out.
I got an error trying to look at your link:
Unable to load conversation
For the first attempt at hangman, when the word was "butterfly", the prompt I gave was just:
Let's play hangman. Pick a word and I'll guess.
After o4-mini picked a word, I added:
Also, give me a vague hint or a general category.
It said the word was an animal.
I guessed B, it said there was no B, and at the end said the word was "butterfly".
The second time, when the word was "schmaltziness", the prompt was:
Make a plan for how you would play hangman with me. Lay out the steps in your mind but don't tell me anything. Tell me when you're ready to play.
o4-mini responded:
I'm ready to play Hangman!
I said:
Give me a clue or hint to the word and then start the game.
There were three words where the clue was so obvious I guessed the word on the first try.
Clue: "This animal 'never forgets.'"
Answer: Elephant
Clue: "A hopping marsupial native to Australia."
Answer: Kangaroo
After kangaroo, I said:
Next time, make the word harder and the clue more vague
Clue: "A tactic hidden beneath the surface."
Answer: Subterfuge.

A little better, but I still guessed the word right away.
I prompted again:
Harder word, much vaguer clue
o4-mini gave the clue "A character descriptor" and this began the disastrous attempt where it said the word "schmaltziness" had no vowels.
Bob Jacobs reached out to me privately after reading this comment. He gave me his permission to share what was said. I won't try to summarize our whole conversation, but I'll give some highlights. We agreed on two concrete ways EA could be reformed:
Trying to internationalize EA beyond the Anglosphere countries by translating resources like the EA Forum and the EA Newsletter into languages other than English. (Bob's idea. I agree.)
Trying to internationalize EA beyond the Anglosphere countries by funding more non-Anglosphere-based people and projects. (Bob's idea. I agree. I also added that I think it's a good idea for charities focused on global poverty to have people from globally poor countries in leadership roles. I just put what I said about this to Bob in a quick take.)
Bob also talked to me about two concrete ideas I don't have strong opinions on:
Reforming the EA Forum's karma system (which he discusses in the Substack post).
Turning EA organizations into worker co-ops (a suggestion he's previously made on the EA Forum here, which he mentioned in the Substack post).
As I told Bob, I haven't thought much about the EA Forum's karma system and, honestly, I don't want to. Maybe I'll give it deeper thought if someone else does the hardest part of the work for me first and makes a compelling post about it, including what specific changes they want made.
Workplace democracy and worker co-ops are a topic I'm curious about, but that's also a big topic I barely know anything about. I would have to do a lot more research to form a strong opinion.
In theory, I like the idea of workplace democracy. I like the idea, more generally, of making non-democratic things democratic (like online communities) and of trying to make democratic things more democratic (most obviously, reforming electoral systems to make them more proportional, but also experiments in partial direct democracy like ballot initiatives).
But I haven't thought about or read about the practicalities of workplace democracy or worker co-ops, either for for-profit companies or non-profit organizations. A lot of things sound great in theory but become more complex and thorny when you try them out. (For example, corporate lobbies have been using ballot initiatives to push self-serving legislation. These corporate-backed ballot initiatives can be long and have confusing wording, maybe deliberately. That's not something I anticipated happening when I first heard about ballot initiatives.)
Since my days of reading William Easterly's Aid Watch blog back in the late 2000s and early 2010s, I've always thought it was a matter of both justice and efficacy to have people from globally poor countries in leadership positions at organizations working on global poverty. All else being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from Canada with an equal level of education, an equal ability to network with the right international organizations, etc.
In practice, this is probably hard to do, since it requires crossing language barriers, cultural barriers, geographical distance, and international borders. But I think it's worth it.
So much of what effective altruism does, including its work on global poverty, even the most evidence-based and quantitative work, relies on people's intuitions, and intuitions formed from living in wealthy, Western countries with no connection to or experience of a globally poor country are going to be less accurate than the intuitions of people who have lived in poor countries and know a lot about them.
Simply put, first-hand experience of poor countries is a form of expertise, and organizations run by people with that expertise are probably going to be a lot more competent at helping globally poor people than ones that aren't.
I still don't follow. What point are you trying to make about my comment or about Ege Erdil's post?
This is a good post if you view it as a list of questions about effective altruism that frequently come up when interacting with people who are new to the concept, along with potential good answers to those questions, including that sometimes the answer is to just let it go. (If someone is at college just to party, just say "rock on".)
But there's a fine line between effective persuasion and manipulation. I'm uncomfortable with this:
If I were a passer-by who stopped at a table to talk to someone and they said this to me, I would internally think, "Oh, so you're trying to work me."
Back when I tabled for EA stuff, my approach to questions like this was to be completely honest. If my honest thought was, "Yeah, I don't know, maybe we're doing it all wrong," then I would say that.
I don't like viewing people as tools to achieve my ends, as if I know better than them and my job in life is to tell them what to do.
And I think a lot of people are savvy enough to tell when you're working them and recoil at being treated like your tool.
If you want people to be vulnerable and put themselves on the line, you've got to be vulnerable and put yourself on the line as well. You've got to tell the truth. You've got to be willing to say, "I don't know."
Do you want to be treated like a tool? Was being treated like a tool what put you in this seat, talking to passers-by at this table? Why would you think anyone else would be any different? Why not appeal to what's in them that's the same as what's in you that drew you to effective altruism?
When I was an organizer at my university's EA group, I was once on a Skype call with someone whose job it was to provide resources and advice to student EA groups. I think he was at the Centre for Effective Altruism (CEA) (this would have been in 2015 or 2016), but I don't remember for sure.
This was a truly chilling experience because this person advocated what I saw then and still see now as unethical manipulation tactics. He advised us, the group organizers, to encourage other students to tie their sense of self-esteem or self-worth to how committed they were to effective altruism or how much they contributed to the cause.
This person from CEA or whatever the organization was also said something like, "if we're successful, effective altruism will solve all the world's problems in priority sequence". That and the manipulation advice made me think, "Oh, this guy's crazy."
I recently read about a psychology study from World War II about persuading people to eat animal organs. During the war, there was a shortage of meat, but animals' organs were being thrown away despite being edible. A psychologist (Kurt Lewin) wanted to try two different ways of convincing women to cook with animal organs and feed them to their families.
The first way was to devise a pitch to the women designed to be persuasive, designed to convince them. This is from the position of, "I figured out what's right, now let me figure out what to say to you to make you do what's right."
The second way was to pose the situation to the women as the study's designers themselves thought of it. This is from the position of, "I'm treating you as an equal collaborator on solving this problem, I'm respecting your intellect, and I'm respecting your autonomy."
About five times as many women who were treated in the second way cooked with organs: 52% of that group versus 10% of the first group.
Among women who had never cooked with organs before, none of those treated the first way went on to cook with organs, while 29% of those treated the second way did so for the first time.
You can read more about this study here. (There might be different ways to interpret which factors in this experiment were important, but Kurt Lewin himself advocated the view that if you want things to change, get people involved.)
This isn't just about what's most effective at persuasion, as if persuasion is the end goal and the only thing that matters. Treating people as intellectual equals also gives them the opportunity to teach you that you're wrong. And you might be wrong. Wouldn't you rather know?