Community Organiser for EA UK - https://www.effectivealtruism.uk
Monthly Overload of EA - https://moea.substack.com/
In 2015, one survey found that 44% of the American public would consider AI an existential threat. By February 2023 that figure was 55%.
I wrote about this idea before the FTX collapse, and I think FTX is a minor influence compared to the increased interest in AI risk.
My original reasoning was that AI safety is a separate field, but it doesn’t have much movement-building work being put into it outside of EA/longtermism/x-risk framed activities.
Another reason why AI takes up a lot of EA space is that there aren’t many other places to go to discuss these topics. That’s bad for the growth of AI safety if it’s hidden behind donating 10% and going vegan, and bad for EA if it gets overcrowded by something that should have its own institutions/events/etc.
If the definition of being more engaged includes going to EAG and being a member of a group, aren’t some of these results a bit circular?
EA isn’t a political party, but I still think it’s an issue if the aims of the keenest members diverge from the original aims of the movement, especially if the barrier to entry to be a member is quite low compared to being in an EA governance position. I would worry that the people who would bother to vote would have much less understanding of the strategic situation than the people who are working full time.
Maybe we have had different experiences; I would say that the people who turn up to more events are usually more interested in the social side of EA. Also, there are a lot of people in the UK who want to have an impact and have a high interest in EA but don’t come to events and wouldn’t want to pay to be a member (or even sign up as a member if it were free).
I think people can still hold organisations to account and follow the money, even if they aren’t members, and this already happens in EA, with lots of critiques of different organisations and individuals.
I think one large disadvantage of a membership association is that it will usually consist of the most interested people, or the people most interested in the social aspect of EA. This may not always correlate with the people who could have the most impact, and it creates a sharp line between who is in and who is out.
I’d be worried about members voting for activities that benefit them the most rather than the ultimate beneficiaries (global poor, animals, future beings).
Olivia Fox Cabane has an alt protein industry landscape map.
A separate organisation just for CBGs would have been useful too, rather than a lot of one- and two-person teams with constant turnover.
I thought about this briefly a few months ago and came up with these ideas.
CEA—incubate CBG groups as team members until they are registered as separate organisations with their own operations staff
CEA but for professional EA network building (EA Consulting network, High Impact Engineers, Hi-Med, etc.). These networks are even more isolated than CBGs, which have some support from CEA
Rethink Priorities—One of the incubated orgs could do similar work to EV Ops (which is maybe what the special projects team is doing already), but it might be good to have something more separate from RP, or a cause-specific support org (animal advocacy, AI safety, biosecurity)
EV Ops—Spin out 80k/GWWC to increase capacity for other smaller orgs
Open Phil—Some of their programs might work better with project managers rather than individuals getting grants (e.g. the Century Fellowship)
Also, looking at local groups, there is some coordination on the groups Slack and at some retreats, but there is still a lot of duplication and a high rate of turnover, which limits any sustained institutional knowledge.
I didn’t vote, but there have been discussions of issues in richer countries that received votes, where the author pointed out how the issue fit into the context of effective altruism.
There have also been posts about mass media interventions, but they generally point to stronger evidence for their effectiveness.
Thanks for diving into the data, David. I think a lot of this might hinge on the ‘highly engaged EAs’ metric and how useful it is for determining impact versus how interested someone is in EA.
Are you also able to see if there are differences between different types of local groups (national/city/university/interest)?
I would go further and say that more people are interested in specific areas like AI safety and biosecurity than in the general framing of x-risks, especially senior professionals who have worked in AI/bio careers.
There is value in some people working on x-risk prioritisation, but that would be a much smaller subset than the eventual sizes of the cause-specific fields.
You mention this in your counterarguments but I think that it should be emphasised more.
Also, Matt Clifford has written regularly about wanting to encourage more entrepreneurship and increase growth.
Wave is a good example.
When I started community building, I would look at the 20 people who turned up most regularly or whom I had regular conversations with, and I would focus on how I could help them improve their impact, often in relatively small ways.
Over time I realised that some of the people who were potentially having the biggest impact weren’t turning up to events regularly (maybe we had just one conversation in four years), but they were able to shift into more impactful careers. This was partly because there were many more people I had one chat with than people I had five chats with, but also because people who are more experienced or busy with work have less time to keep turning up to EA social events, and they often already had social communities they were part of.
It would also be surprising/suspicious if the actions that make members happiest also happened to be the best way to allocate talent to problems.
I guess the overlap is quite high for myself between ‘impact’ and ‘impact as a community builder’.
Thanks for writing this post; I’ve been thinking about this framing recently, though more because I felt I was member-first when I started community building and am now much more cause-first when thinking about how to have the most impact.
I don’t agree with some of the categorisations in the table and think quite a few of them don’t fall on the cause/member axis. For example, you could have member-first outreach that is highly deferential (GiveWell suggestions) and cause-first outreach that brings together very different people who disagree with EA.
Also, when you say the downsides of cause-first are that it led to lock-in or a lack of diversification, I feel those are more likely due to EA’s earlier member-first focus.
I thought this post had a good overview.
I’ve written a bit about this here and think that they would both be better off if they were more distinct.
As AI safety has grown over the last few years, there may have been missed growth opportunities from not having a larger, separate identity.
I spoke to someone at EAG London 2023 who didn’t realise that AI safety would get discussed at EAG until someone suggested they go after doing an AI safety fellowship. There are probably many examples of people with an interest in emerging tech risks who would have got involved earlier if they’d been presented with those options at the beginning.