Formerly Ollie Base (until August 2025)
OllieRodriguez
We’re growing: CEA is increasing its font size
Survey of AI safety leaders on x-risk, AGI timelines, and resource allocation (Feb 2026)
I take “AGI goes well” to imply a wealthy and technologically advanced society. I think that could mean:
- Very cheap and delicious meat alternatives.
- Factory farming waning as it runs into inefficiencies and bottlenecks, unable to compete with the above.
- More demand for higher-welfare options like free-range and local produce.
But it also seems possible that we “lock in” factory farming and scale it further, and that AGI adopts speciesist views.
Very uncertain; I don’t find myself strongly disagreeing with claims across the spectrum.
I think most answers here are missing what seems to me the most likely explanation: the people who are motivated by EA principles to engage with politics are not public about their motivations or affiliations with EA. Not just because the EA brand is disliked by some political groups, but because it seems generally wise to avoid having strong ideological identities in politics beyond motivations like “do better for my constituents”.
Cool! For a brief second, I thought this post was going to be an extremely long list of the trillions of sentient beings my actions could influence, but this is much more digestible.
Yes, the sample I looked at did have some very expensive retreats, and I think you can run much cheaper ones. Note though that EAGx events / conferences can also be much cheaper, so you should adjust on both sides (I think I had a cost-inflated sample due to high spending on community building in 2022).
I still think outcomes-per-person at retreats often don’t seem that different from those at larger events, so returns to scale are often real, i.e. focus on cost-per-attendee. If your theory of change involves helping lots of people you don’t know well find careers in EA/AIS, I think going bigger is usually a good move.
Retreats are definitely a useful intervention, especially when you have a smaller group whose needs/goals you know well (e.g. more involved community members looking to go deeper on topics).
I had a reminder to check back on this. I had a quick scan, and I don’t think this happened. Joe’s post probably meets the bar, and does suggest it’s still a contentious issue, but I can’t find 9+ more, so it’s not as contentious as you predicted :)
Awesome line-up, nice work!
Starting afresh seems like the right move here, and I think it’s super commendable to share that you’re re-committing.
I have the same problem when it comes to end of year donations, and that prompted me to move to monthly donations (even if the idealized version of me would save accordingly and then make bigger donations more thoughtfully EOY).
Also:
> In total, I’ve given about half of what I pledged since 2016.
This is still a lot of money, and a lot of good. Giving 5% of your income to charity for almost 10 years is a hugely generous and selfless thing to do :)
Cool! FYI, when I open your home page on a large monitor, the “Log In” and “Blog” buttons overlap each other (they’re fine on small screens).
You might still want more on the margin, but I think this already happens a fair amount in EA:
Fellowships are very common. The amount of freedom seems to vary, but many (e.g., GovAI, historically FHI) involve giving fellows several months to a year to explore topics with limited oversight.
EA funders often give unrestricted funding, frequently to individuals to pursue their own projects (see e.g. EA Funds grants, ACX grants, Manifund).
Compared to other social/intellectual movements, EA is (still) well-funded. I expect many other non-profits / activist groups / academic institutions would be amazed at how many people in EA are paid well to think and write about relevant topics with a lot of freedom.
Once you register for the event, you’ll be invited to Swapcard, our event platform which has all of this information :) DM me if you have any issues.
+1. Bumping my quick take listing where some of the people who participated in my EA uni group while I was there ended up (this is now considerably out of date, but AFAIK these people remain on paths that seem super impactful)
CEA is hiring for a Chief of Staff (Events Team)
It looks like the post in question is now tagged ‘Community’.
10 Years of EA Global
That quote seems taken out of context. I don’t know the passage (stagnation chapter?), but I don’t think Will was making that point in relation to what kind of skillset the EA community needs.
This is a great post!
> ITN estimates sometimes consider broad versions of the problem when estimating importance and narrow versions when estimating total investment for the neglectedness factor (or otherwise exaggerate neglectedness), which inflates the overall results

I really like this framing. It isn’t an ITN estimate, but a related claim I think I’ve seen a few times in EA spaces is:
“billions/trillions of dollars are being invested in AI development, but very few people are working on AI safety”
I think this claim:
Seems to ignore large swathes of work geared towards safety-adjacent things like robustness and reliability.
Discounts other types of AI safety “investments” (e.g., public support, regulatory efforts).
Smuggles in a version of “AI safety” that actually means something like “technical research focused on catastrophic risks motivated by a fairly specific worldview”.
I still think technical AI safety research is probably neglected, and I expect there’s an argument here that does hold up. I’d love to see a more thorough ITN on this.
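To make the broad-vs-narrow mismatch concrete, here’s a sketch of the standard three-factor decomposition (following 80,000 Hours’ framing, where importance and tractability are sometimes called scale and solvability; the gloss on which factor gets the broad or narrow problem is mine, not from any actual estimate):

$$
\frac{\text{good done}}{\text{extra dollar}}
= \underbrace{\frac{\text{good done}}{\%\ \text{of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\ \text{of problem solved}}{\%\ \text{increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\ \text{increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}\ \approx\ 1/\text{current resources}}
$$

The intermediate terms only cancel if “the problem” means the same thing in every factor. Scoring importance against a broad problem (say, all harms from advanced AI) while counting “current resources” for a narrow slice (say, technical x-risk research) breaks that cancellation, and the product overstates cost-effectiveness.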
Thanks Jan, I appreciate the pushback.
As an event focused on x-risk, yes, I think this is fair.
It’s true that:
The agenda featured some talks emphasising risks from AI-enabled human takeover.
Some of the most popular memos also emphasised this risk category.
Some people took the survey after reading the agenda and the memos.
But I don’t think attendees were as strongly influenced as you seem to imply:
We highlighted some memos to read at the beginning, but soon after launching the memo platform, we prioritized memos by votes from attendees. Memos making the case for more emphasis on risks from aligned AI were heavily upvoted, and some memos that we highlighted from the beginning received fewer upvotes.
The survey was in part motivated by disagreements I’d heard about how the AIS community was allocating resources. While I’m sure some attendees were influenced by information they recently encountered, many will have thought about these questions in advance of the survey.
I don’t have the full data, but I think it’s likely that many attendees completed the survey before engaging with the memos and before the full agenda was published.
I do think you’re pointing to a real effect to be aware of, and I appreciate you raising it, but I don’t think it’s as significant as you make out (though maybe you don’t think it’s super significant either).
I think the areas of broad consensus accurately (if roughly) reflect the data we have here and what we saw in memos. FWIW, my overall takeaway from running this survey is that leaders and key thinkers have a wide range of views, and I think this post captures and conveys that.