That quote seems taken out of context. I don’t know the passage (stagnation chapter?), but I don’t think Will was making that point in relation to what kind of skillset the EA community needs.
CEA is hiring for a Chief of Staff (Events Team)
10 Years of EA Global
This is a great post!
> ITN estimates sometimes consider broad versions of the problem when estimating importance and narrow versions when estimating total investment for the neglectedness factor (or otherwise exaggerate neglectedness), which inflates the overall results

I really like this framing. It isn’t an ITN estimate, but a related claim I think I’ve seen a few times in EA spaces is:
“billions/trillions of dollars are being invested in AI development, but very few people are working on AI safety”
I think this claim:
Seems to ignore large swathes of work geared towards safety-adjacent things like robustness and reliability.
Discounts other types of AI safety “investments” (e.g., public support, regulatory efforts).
Smuggles in a version of “AI safety” that actually means something like “technical research focused on catastrophic risks motivated by a fairly specific worldview”.
I still think technical AI safety research is probably neglected, and I expect there’s an argument here that does hold up. I’d love to see a more thorough ITN on this.
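To make the “broad importance, narrow neglectedness” failure mode concrete, here’s a toy calculation (all the dollar figures below are made up purely for illustration):

```latex
% 80,000 Hours-style factorisation (roughly):
%   good done per extra dollar \propto importance x tractability x neglectedness,
% with neglectedness \approx 1 / (resources already invested).
%
% Toy numbers: the broad problem ("make AI go well") has $1B invested;
% the narrow problem (technical safety research) has $10M invested.
% Pairing broad importance with narrow investment inflates the estimate by
\[
\frac{N_{\text{narrow}}}{N_{\text{broad}}}
  = \frac{1/\$10\text{M}}{1/\$1\text{B}}
  = 100\times,
\]
% i.e. two silent orders of magnitude just from mixing scopes.
```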
> By my count, barring Trajan House, it now appears that EA has officially been annexed from Oxford
Do you mean Oxford University? That could be right (though a little strong, I’m sure it has its sympathisers). Noting that Oxford is still one of the cities (towns?) with the highest density of EAs in the world. People here are also very engaged (i.e. probably work in the space).
> I assumed the main reason for doing something like that is to get people engaged and actually thinking about ideas
I don’t know what motivations people usually have, but I also feel skeptical of this vague “activation” theory of change. If session leads don’t know what actions they want session participants to take, I’m not optimistic about attendees generating useful actions themselves by discussing the topic for 10 minutes in a casual no-stakes, no-rigour, no-guidance setting. I’m more optimistic if the ask is “open a doc and write things that you could do”.
> I would do a meeting of people filtered for being high context and having relevant thoughts, which is much more likely to work.
Yep, the thing you’ve described here sounds promising for the reasons Alex covered :) I realise I was thinking of the conference setting in my critique here (and probably should’ve made that explicit), but I’m much more optimistic about brainstorming in small groups of people with shared context, shared goals and using something like the format you’ve described.
Please, no more group brainstorming
It’s not clear that EA funding relies on Facebook/Meta much anymore. The original tweet is deleted and this post is three years old, but Holden wrote of Cari and Dustin’s wealth:
> I also note that META stock is not as large a part of their portfolio as some seem to assume
You could argue Facebook/Meta is what made Dustin wealthy originally, but it’s probably not correct to say that EA funding “deeply relies” on Meta today.
Yep, I think this is right, but we don’t totally rely on these kinds of surveys!
We also conduct follow-up surveys to check what actually happens a few months after each event, and unsurprisingly you do see intentions and projects dissipate (as well as many materialising). A problem we face is that these surveys have much lower response rates.
Other, more reliable evidence about the impact of EAG comes from surveys which ask people how they found impactful work (e.g., the EA Survey, Open Phil’s surveys), and EAG is cited a lot. We’ll usually turn to this kind of evidence to think about our impact, though end-of-event feedback surveys are useful for feedback about content, venue, catering, attendee interactions, etc., and you can also do things like discounting reported impact in end-of-event surveys using follow-up survey data.
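To sketch what that discounting can look like (the numbers here are invented for illustration, not from our data):

```latex
% Toy discounting of end-of-event impact using follow-up data:
%   discounted impact = impact reported at the event x materialisation rate,
% where the rate is estimated from follow-up respondents.
\[
\widehat{I} \;=\; I_{\text{event}} \times
  \frac{\text{outcomes confirmed at follow-up}}{\text{outcomes those same respondents reported at the event}}
\]
% e.g. if follow-up respondents reported 75 planned projects at the event
% and 30 are confirmed months later, the rate is 0.4, so 200 reported
% projects event-wide discount to ~80 (modulo the low response rate).
```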
I’m reading “OK” as “morally permissible” rather than “not harmful”. E.g., I think it’s also “OK” to eat meat, even though I think it’s causing harm.
(Not saying you should clarify the poll, it’s clear enough and will probably produce interesting results either way!)
I thought this was a great post, thanks for sharing! I think you’re unusually productive at identifying important insights in ethics and philosophy. Please keep it up!
I strongly upvoted this. I don’t endorse all your claims, but this is really easy to engage with, it’s on a very important topic, and I admire how you charitably worked within the framework Shapira offered while ending up in a very different place.
Thanks. In the original quick take, you wrote “thousands of independent and technologically advanced colonies”, but here you write “hundreds of millions”.
If you think there’s a 1 in 10,000 or 1 in a million chance of any given independent and technologically advanced colony creating astronomical suffering, it matters whether there are thousands or millions of colonies. Maybe you think it’s more like 1 in 100, and then thousands (or more) would make it extremely likely.
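To put toy numbers on that (assuming, unrealistically, that the colonies are independent and share the same per-colony probability p):

```latex
% Chance that at least one of n independent colonies creates
% astronomical suffering, given per-colony probability p:
\[
P(\text{at least one}) \;=\; 1 - (1 - p)^{n}
\]
% With p = 1/10,000:  n = 1,000  -> about 9.5%
%                     n = 10^8   -> essentially 100%
% With p = 1/100:     even n = 1,000 -> about 99.996%
```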
> probably near 100% if digital sentience is possible… it only takes one
Can you expand on this? I guess the stipulation of thousands of advanced colonies does some of the work here, but this still seems overconfident to me given how little we understand about digital sentience.
I found this moving and enlightening, thanks for sharing. Looking forward to the series!
EAGx undergraduate acceptance rate across 2024 and 2025 = ~82%
EAGx first-timer undergraduate acceptance rate across 2024 and 2025 = ~76%
Obvious caveat that if we tell lots of people that the acceptance rate is high, we might attract more people without any context on EA and the rate would go down.
(I’ve not closely checked the data)
I’d feel pretty gaslit if someone said EA was going swimmingly and unaffected by the tribulations of the last couple of years. Perhaps less so if they think there’s been a bounce back after an initial decline but, you know, I’d want to see the data for that.
I agree with this fwiw, that seems fair
Thanks for adding more here :) I think that evidence is more persuasive, though it still reads a little vibe-y and data-free, and involves reading intention into some actions where it might not be there.
> Ever since November 2022, the EA movement has only seemed to know criticism and scandal. Some have even gone so far as to declare that EA is dead or dying,[1] or no longer worth standing behind,[2] or otherwise disassociate themselves from the movement even if outside observers would clearly identify them as being ‘EA’.[3] This negative environment that EA finds itself in is, I think, indicative of its state as a social movement in decline.
I don’t think the claim “EA is in decline” is well-defended in this post. You link to a few naysayers here, but I don’t think that’s good evidence. “EA is in decline” is also self-fulfilling—it might decline if everyone’s saying it’s declining—so I expect some people say this because they want it to happen, not because they’ve reviewed the evidence and have concluded this is what’s happening.
Colleagues of mine can pull together more evidence against, but as two examples that are salient to me:
EA Global London 2025 is on track to be the biggest EA conference ever.
We expect to welcome more people to EA events (EAG, EAGx, EA Summits) this year than ever before.
I find that hard to square with “EA is in decline”. To be clear, I think the claim might be true, but it’s an important enough question that it deserves some more thoughtful study and data, rather than vibes on Twitter.
It looks like the post in question is now tagged ‘Community’.