Thinking, writing, and tweeting from Berkeley, California. Previously, I ran programs at the Institute for Law & AI, worked on the one-on-one advising team at 80,000 Hours in London, and was a patent litigator at Sidley Austin in Chicago.
Mjreard
My list is very similar to yours. I believe items 1, 2, 3, 4, and 5 have already been achieved to substantial degrees and we continue to see progress in the relevant areas on a quarterly basis. I don’t know about the status of 6.
For clarity on item 1, AI company revenues in 2025 are on track to cover 2024 costs, so on a product basis, AI models are profitable; it’s the cost of new models that pulls annual figures into the red. I think this will stop being true soon, but that’s my speculation, not evidence, so I remain open to the possibility that scaling will continue to make progress towards AGI, potentially soon.
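To put toy numbers on that distinction (these are made up for illustration, not actual company figures), a minimal sketch:

```python
# Hypothetical figures, in $B, for the "profitable on a product basis,
# in the red on an annual basis" point. None of these are real numbers.
revenue_2025        = 10.0  # revenue earned serving models trained in 2024
cost_of_2024_models = 8.0   # training + serving costs of those existing models
cost_of_next_models = 15.0  # 2025 spending on the next generation of models

product_margin = revenue_2025 - cost_of_2024_models    # +2.0: the existing product pays for itself
annual_result  = product_margin - cost_of_next_models  # -13.0: the company as a whole still loses money
print(product_margin, annual_result)
```

If the next-generation spending ever stops paying for itself, the second line is the one that matters; that’s the speculation above.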
Your picture of EA work on AGI preparation is inaccurate enough that I don’t think you made a serious effort to understand the space you’re criticizing. Most of the work looks like METR benchmarking, model card/RSP policy (companies should test new models for dangerous capabilities and propose mitigations/make safety cases), mech interp, compute monitoring/export controls research, and testing for undesirable behavior in current models.
Other people do make forecasts that rely on philosophical priors, but those forecasts extrapolate from and respond to the evidence being generated. You’re welcome to argue that their priors are wrong or that they’re overconfident, but comparing this to preparing for an alien invasion based on ‘Oumuamua is bad faith. We understand the physics of space travel well enough to confidently put a very low prior on alien invasion. One thing basically everyone in the AI debate agrees on is that we do not understand where the limits of progress are, even as data reflecting continued progress keeps flowing.
I agree there’s logical space for something less than AGI making the investments rational, but I think the gap between that and full AGI is pretty small. That’s a peculiarity of my own world model though, so not something to bank on.
My interpretation of the survey responses is that selecting “unlikely” when there are also “not sure” and “very unlikely” options suggests substantial probability (i.e. >10%) on the part of the respondents who say “unlikely” or “don’t know.” Reasonable uncertainty is all you need to justify work on something so important-if-true, and the cited survey seems to provide that.
I directionally agree that EAs are overestimating the imminence of AGI and will incur some credibility costs, but the bits of circumstantial evidence you present here don’t warrant the confidence you express. 76% of experts saying it’s “unlikely” the current paradigm will lead to AGI leaves ample room for a majority thinking there’s a 10%+ chance it will, which is more than enough to justify EA efforts here.
And most of what EAs are working on is determining whether we’re in that world and what practical steps to take to safeguard value given what we know. It’s premature to declare the case closed when the markets and the field are still mostly against you (at the 10% threshold).
I wish EA were a bigger and broader movement such that we could do more hedging, but given that you only have a few hundred people and a few $100m/yr, it’s reasonable to stake that on something this potentially important that no one else is doing effective work on.
I would like to bring back more of the pre-ChatGPT disposition where people were more comfortable emphasizing their uncertainty, but standing by the expected value of AI safety work. I’m also open to the idea that that modesty too heavily burdens our ability to have impact in the 10%+ of worlds where it really matters.
Yes, but this shows your claim here is actually just empirical skepticism about how general and how capable AI systems will be.
It is true that loose talk of AIs being “[merely] better than” all humans at all tasks does not imply doom, but the “merely” part is not what doomers believe.
If AIs are a perfect substitute for humans with lower absolute costs of production – where “costs” mean the physical resources needed to keep a flesh-and-blood human alive and productive – humans will have a comparative advantage only in theory. In practice, it would make more sense to get rid of the humans and use the inputs that would have sustained them to produce more AI labor.
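As a toy sketch of the “in theory vs. in practice” point, with entirely made-up productivity numbers: comparative advantage still says a human could gain from trade, but whoever controls the sustenance inputs maximizes output by redirecting them to AI labor.

```python
# Made-up numbers: one unit of "sustenance" resources either keeps a human
# producing 1 unit of output or runs AI hardware producing 5 units of output.
RESOURCES = 100.0
HUMAN_OUTPUT_PER_UNIT = 1.0  # hypothetical human productivity per unit of inputs
AI_OUTPUT_PER_UNIT = 5.0     # hypothetical AI productivity on the same inputs

def total_output(units_to_humans: float) -> float:
    """Total output when some inputs sustain humans and the rest run AI labor."""
    units_to_ai = RESOURCES - units_to_humans
    return units_to_humans * HUMAN_OUTPUT_PER_UNIT + units_to_ai * AI_OUTPUT_PER_UNIT

print(total_output(50.0))  # 300.0 -- mixed allocation
print(total_output(0.0))   # 500.0 -- every input redirected to AI labor
```

Nothing about comparative advantage prevents the second allocation from being chosen; the trade logic just doesn’t settle who gets the inputs.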
Yep. I agree there’s too great a default towards optimism and that some people are wasting their time as a result.
Based on the numbers I saw when I worked on hiring, I’d say something like 125-200 of the GovAI applicants were determined to work in AIS, properly conceived, as the primary thing they wanted to do. FIG is harder to guess, but I wouldn’t be surprised if they just got added to some popular lists of random/”easy” internship opportunities and got flooded with totally irrelevant apps.
No on appalled; no on oversaturated; yes on being clear that AIS projects are looking for the ultra-talented, but be mindful of how hard it is to be well calibrated on who is ultra-talented, including yourself.
In my experience, a sizable majority of applicants in pools this large are both plainly unqualified and don’t understand the mission of AIS orgs. You shouldn’t assume most or even many resemble you or other people on the EA Forum.
Everyone should be open to not being a fit for many projects or open to the idea that better candidates are out there. I wish for the world’s sake that I become unhirable!
Interesting exchange there. I agree that the vision should be to have EA so in-the-water that most people don’t realize they’re doing “Effective Altruism.” I’m very uncertain about how you get from here to there. I doubt it makes sense to shrink or downplay the existing EA community. My intuition is you want to scale up the memes at every level. Right now we’re selling everything to buy more AI safety memes. It’s going okay, but it’s risky and I’m conscious of the costs to everything else.
Specifically inspired by Mechanize’s piece on technological determinism. It seems overstated, but I wonder what the altruistic thing to do would be if they were right.
I don’t think it’s wise to redefine what should count as an ultra-processed food to suit your agenda. When people talk about UPFs, they’re clearly concerned about chemical additives like emulsifiers, stabilizers, colorings, and artificial sweeteners. As unlike wild chickens as today’s broiler chickens are, they don’t contain those things and don’t pose the same health risks that those things do. In that light, I found this discrediting:
> For now I’ll just note that people are confused about processing — and the “experts” are part of the problem. That same 2024 survey of 10,000 Europeans, run by the EU- and FAO-backed EIT Food group, categorized chicken as “unprocessed” and plant-based chicken as “ultra-processed.” But both products come from processing plants. And both start as soybeans, corn, and additives. The main difference is that the chicken version includes an extra layer of “unnatural” processing — inside the stomach of a Franken-chicken confined in an animal factory.
I hope others don’t repeat it. It may well be that the general concept of UPFs is nonsense — and if so, people should argue for that, or that Beyond/Impossible doesn’t qualify under a reasonable definition — but people know what they have in mind when they use the term and will know you’re not respecting their concerns when you try to be clever like this.
As Huw says, the video comes first. I think this puts almost anything you’d be excited about off the table. Factory farming is a really aversive topic for people, and people are quite opposed to large scale WAS interventions. The intervention in the video he did make wasn’t chosen at random. People like charismatic megafauna.
The relationship between how fun your movement is for participants and its overall effectiveness is non-linear. You need to offer selfish rewards for (most) people to join. Offer too few selfish rewards and you’re going to have a small, ineffective movement no matter how good your ideas are or how qualitatively good the few people you do have are.
I agree that entertaining EAs has no terminal value, but it has huge instrumental value. Few people seem to be trying to make EA interesting and fun on that score. People’s main experience with EA seems to be getting pitched on jobs and then not getting them. Not fun!
There’s real work to be done getting people excited to earn to give and spread the ideas in their spare time. Done well enough, it can even improve your direct work talent pool!
I’m surprised at the “on during presentations, off during 1:1s” advice. My intuition is the opposite because of the volume of droplets and aerosols directed right at you by a speaking person in a 1:1. That seems more dangerous than sitting in a quiet room with many people just lightly breathing through their noses not directed at anyone. If you do all your 1:1s outside, I can see how this flips, but maybe you should say the recommendation depends on that.
This is assuming you go to 3-4 presentations and have ~20 1:1s.
The real solution is of course for ASB to provide us with 500 of those chlorine misters.
I keep coming back to Yeats on this topic:
“The best lack all conviction, while the worst are full of passionate intensity”
I think the exceptionally truth-seeking, analytical, and quantitative nature of EA is virtuous, but those virtues too easily translate into a culture of timidity if you don’t consciously promote boldness.
Conceptually, Julia Galef talks about pairing social confidence with epistemic humility in *The Scout Mindset*. It doesn’t come naturally, but it is possible and valuable when done well.
Right now I think Nicholas Decker is a great embodiment of this ethos. He says what he thinks without fear or social hesitation. He’s not always right and he flagrantly runs afoul of what’s considered socially acceptable or what a PR consultant would tell him to do, but there’s no mistaking his good-naturedness and self-assurance that he’s on the right side of history, because generally *he is.* He doesn’t make the perfect the enemy of the good or excessively play it safe to avoid criticism.
A “bring on the haters” attitude is in fact more welcoming and trust-inducing than words carefully crafted to minimize criticism because it defeats the concern that you’re hiding something. And come on friends, the stuff you’re “hiding” in EA’s case – veganism, shrimp, future generations, etc. – is nothing to be ashamed of. And when you soft roll it, you’re endorsing the social sanction on these things as weird. Fuck that. Hit back. With grace. And pride. Fire the PR consultant in your head.
Amsterdam seems like the best one to look into given your background. EA Netherlands has long seemed strong to me and they regularly hold EAGxs there. Generally Prague and Berlin will be quite strong too.
I’d be doing less good with my life if I hadn’t heard of effective altruism
Subjectively, I think I have done a lot more good because of EA, but I have doubts around the potentially negative sign of AI safety work (have we contributed to AI hype? Delayed beneficial AI? Something else?) and around cluelessness. On cluelessness, my alternative was a very causally-minimal life, close to never leaving the house. I still don’t leave the house, but lots of what I send out of the house over the internet is more effectual than it otherwise would be.
Re commuter schools, it seems like the argument is just as strong in principle because the would-be organizer’s opportunity cost is proportionately lower. In practice, if that organizer is reading this post, there’s a good chance that they’re a big-enough fish in a small-enough pond that they should focus on their individual development, so your point might hold nonetheless.
Maybe something that spans all the cruxes here is that there are just very low-effort ways to run a group and capture a big part of the value. If no one else is doing it, it’s very worth it to text the 3-4 interested people you know and substitute a group meeting for a general hangout once a month.
4-6 seem like compelling reasons to discount the intersection of AI and animals work (which is what this post is addressing), because AI won’t be changing what’s important for animals very much in those scenarios. I don’t think the post makes any comment on the value of current, conventional animal welfare work in absolute terms.
A 10% chance of transformative AI this decade justifies current EA efforts to make AI go well. That includes the opportunity cost of that money not going to other things in the 90% of worlds without TAI. Spending money on e.g. nuclear disarmament instead of AI also implies harm in the 10% of worlds where TAI was coming. Just calculating the expected value of each accounts for both of these costs.
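As a sketch of what that expected-value bookkeeping looks like, with entirely made-up payoff numbers (the 10% figure is the only number carried over from above):

```python
# Hypothetical payoffs: the value of each spending option in the worlds where
# transformative AI does / does not arrive this decade. Illustrative only.
P_TAI = 0.10  # assumed chance of transformative AI this decade

options = {
    # option: (value if TAI arrives, value if it does not)
    "ai_safety": (100.0, 1.0),
    "nuclear_disarmament": (2.0, 5.0),
}

for name, (v_tai, v_no_tai) in options.items():
    ev = P_TAI * v_tai + (1 - P_TAI) * v_no_tai
    print(name, ev)  # ai_safety -> 10.9, nuclear_disarmament -> 4.7
```

Both the 90%-world opportunity cost of AI spending and the 10%-world cost of spending elsewhere show up automatically in the same expectation.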
It’s also important to understand that Hendrycks and Yudkowsky were simply describing/predicting the geopolitical equilibrium that follows from their strategies, not independently advocating for the airstrikes or sabotage. Leopold is a more ambiguous case, but even he says that the race is already the reality, not something he prefers independently. I also think very few “EA” dollars are going to any of these groups/individuals.