I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the ‘narrow EA’ strategy is a mistake because there’s a good chance it is wrong to try to guide society without broader societal participation.
In other words, if, as MacAskill argues here, we should get our shit together first and then either a) collectively decide on a way forward or b) allow everyone to make their own way forward, I think it’s also important that the ‘getting our shit together’ has broad societal participation.
My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like “the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism” then I wouldn’t be surprised to see articles saying “How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans”. Doesn’t mean we should ignore the negative pieces, and there very well may be things we can do to decrease it at the margin, but ultimately, I’d be surprised if there was a way around it. I also think it’s notable how much press there is that agrees with AI X-risk concerns; it’s not like there’s a consensus in the media that it should be dismissed.
+1; except that I would say we should expect to see more, and more high-profile.
AI xrisk is now moving from “weird idea that some academics and oddballs buy into” to “topic which is influencing and motivating significant policy interventions”, including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).
The former, for a lot of people (e.g. folks in AI/CS who didn’t ‘buy’ xrisk) was a minor annoyance. The latter is something that will concern them—either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided.
I would think it’s reasonable to anticipate more of this.
either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided
or because they feel it as a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)
Hmm, I agree that with influence comes increased scrutiny, and the trade-off is worth it in many cases, but I think there are various angles this scrutiny might come from, and I think this is a particularly bad one.
Why? Maybe I’m being overly sensitive but, to me, the piece has an underlying narrative of a covert group exercising undue influence over the government. If we had more of an outside game, I would expect the scrutiny to instead focus on either the substance of the issue or on the outside game actors. Either would probably be an improvement.
Furthermore, there’s still the very important issue of how appropriate it is for us to try to guide society without broader societal participation.
the piece has an underlying narrative of a covert group exercising undue influence over the government
My honest perspective is that if you’re a lone individual affecting policy, detractors will call you a wannabe tyrant; if you’re a small group, they’ll call you a conspiracy; and if you’re a large group, they’ll call you an uninformed mob. Regardless, your political opponents will attempt to paint your efforts as illegitimate, and while certain lines of criticism may be more effective than others, I wouldn’t expect scrutiny to simply focus on the substance either way.
I agree that we should have more of an outside game in addition to an inside game, but I’d also note that efforts at developing an outside game could similarly face harsh criticism (e.g., “appealing to the base instincts of random individuals, taking advantage of these individuals’ confusion on the topic, to make up for their own lack of support from actual experts”).
Maybe I’m in a bubble, but I don’t recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as “uninformed mobs”. This article from the Daily Mail is about as close as it gets, but I think I’d rather have the Daily Mail writing about a wild What We Ourselves party than Politico insinuating a conspiracy.
Ultimately, I don’t think any of us know the optimal split in a social change portfolio between the outside game and the inside game, so perhaps we should adapt as the criticism comes in. If we get a few articles insinuating conspiracy, maybe we should reallocate towards the outside game, and vice versa.
And again, I know I sound like a broken record, but there’s also the issue of how appropriate it is for us to try to guide society without broader participation.
I don’t recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as “uninformed mobs”
So progressive causes will generally be portrayed positively by progressive-leaning media, but conservative-leaning media has definitely portrayed all those movements as ~mobs (especially BLM and Extinction Rebellion), and predecessor movements, such as the Civil Rights movement, were likewise often portrayed as mobs by detractors. Now, maybe you don’t personally find conservative media to be “reputable,” but (at least in the US, perhaps less so in the UK) around half the power will generally be held by conservatives (and perhaps more than half going forward).
For sure progressive publications will be more positive, and I don’t mean to suggest that conservative media can’t be reputable.
When I say “reputable publications” I am referring to the organisations at the top of this list of the most trusted news outlets in the US. My impression is that very few of these regularly characterise the aforementioned movements as “uninformed mobs”.
So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which both use similar rhetoric. (And FWIW, I also wouldn’t be super surprised to see somewhat similar rhetoric from WSJ or Forbes, though probably put less bluntly.)
I’d also note that the left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc.), so it’s not a one-directional phenomenon.
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define ‘reputable’ as ‘those organisations most trusted by the general public’, which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. are not reputable. But then maybe YouGov’s method is flawed? That’s plausible.
But we’ve fallen into a bit of a digression here. As I see it, there are four cruxes:
Does a focus on the inside game make us vulnerable to the criticism that we’re a part of a conspiracy?
For me, yes.
Does this have the potential to undermine our efforts?
For me, yes.
If we reallocate (to some degree) towards the outside game in an effort to hedge against this risk, are we likely to be labelled an uninformed mob, and thus undermine our efforts?
For me, no, not anytime soon (although, as you state, organisations such as Fox will do this before organisations such as PBS, and Fox is trusted by an important subsection of the US population).
Is it unquestionably OK to try to guide society without broader societal participation?
For me, no.
I think our biggest disagreement is with 3. I think it’s possible to undermine our efforts by acting in such a way that organisations such as Fox characterise us as an uninformed mob. However, I think we’re a long, long way from that happening. You seem to think we’re much closer, is that correct? Could you explain why?
I don’t know where you stand on 4.
P.S. I’m enjoying this discussion, thanks for taking the time!
I agree, and this is why I’m in favour of a Big Tent approach to EA. This risk comes from a lack of understanding of the diversity of thought within EA and of the fact that it isn’t claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but “EA” can’t—it isn’t a person, it’s a network.
If you have a lot of influence, articles like this are inevitable.
EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That’s where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI, which is so rapidly moving and where the technology and its social consequences are likely to be far different in crucial respects from earlier predictions (even if the predictions are mostly true—this is a very hard dynamic to manage).
The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we’ll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
EA is also vulnerable to criticism as an elitist movement, and its interconnection with the AI industry will make it seem biased.
EA is not a unitary actor and EAs will often have opposing views on things. This makes any sort of reputation management quite challenging.
The most natural precedent for EA is the Freemasons, and people hated them.
I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.
The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from orgs like the FT and Politico. Meanwhile, with EA, people like Orban don’t give a damn about EA but Politico is already publishing scathing pieces.
I don’t think reputation management is as hard as is often supposed in EA. I think it’s just that it hasn’t been prioritised much until recently (e.g., CEA didn’t have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don’t have the funding.
Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it’s in our short-term interests) so I’m a bit pessimistic, though I agree it is a good idea to try. What’s a good goal to help them accomplish in your view?
Some are hostile but not all, and there are disagreements and divisions just as deep if not deeper in AI ethics as there are in EA or any other broad community with multiple important aims that you can think of.
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this Sean, but I really don’t think there’s much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she’ll use her influence as strongly as possible in the ‘AI Ethics’ community.
Seth Lazar also seems intractably anti-EA. It’s annoying how much of this dialogue happens on Twitter/X, especially since it’s very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor once also posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone once, lamenting the gap between the Safety and Ethics fields. I just really haven’t seen where the Safety->Ethics hostility has been; I’ve really only ever seen the reverse, but of course I’m 100% sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work along these lines can happen.
But I really think there’s a strong anti-EA sentiment amongst the generally left-wing/critical-aligned parts of the ‘AI Ethics’ field, and they aren’t taking any prisoners. In their eyes, AI xRisk Safety is bad, EA is bad, and we’re in a direct zero-sum conflict over public attention and power. I think offering a hand is commendable, but any AI Safety researchers reading this had better have their shields at the ready in case the hostile attacks come.
just really haven’t seen where the Safety->Ethics hostility has been
From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of “everything for everyone” models – and also distracted attention from the increasing harms that result from the development and use of those models.
Which, frankly, is true, given how much people in AI Safety collaborated and mingled with people in large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment. But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey, and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics community.
I understand if you look at tweets by people like Dr Gebru, that it can appear overly intense and like it’s not warranted (what did we ever say to them?). But we need to be aware of the historical position of power that the AI Safety community has actually had, what narratives we ended up spreading (moving the Overton window over “AGI”), and what that has led to.
From the perspective of AI Ethics researchers, here is this dominant group of longtermists broadly that has overall caused all this damage. And AI Ethics people are organising and screaming at the top of their lungs to get the harms to stop.
From their perspective, they need to put pressure on longtermists, and they need to call them out in public, otherwise the harms will continue. The longtermists are not as aware of those harms (or don’t care about them much compared to their techno-future aspirations), so longtermists see it as unfair/bad to be called out this way as a group.
Then, when AI Ethics researchers critique us with words, some people involved around our community (usually the more blatant ones) respond with “why are you so mean to us? why are you saying transhumanists are like eugenicists? why are you against us trying to steer technological progress? why don’t you consider extinction risks?”
Hope that’s somewhat clarifying. I know this is not going to resonate for many people here, so I’m ready for the downvotes.
I think this is imprecise. In my mind there are two categories:
People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen by his complaints about the UK taskforce and trying to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytical philosophers looking for funding and in clout competition with EA. They’ve lost a lot of social capital because they repeat a lot of old canards about AI. My model of them is something akin to: they can’t do fizzbuzz or say what a transformer is, so they’ll just say sentences about how AI can’t do things and how there’s a lot of hype and power centralisation. They are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to just not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say RLHF solves alignment and that “Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists.”
People in the other camp are more likely to think EA is problematic, power-hungry, and covering for big tech. People in this camp would be your Dr. Gebru, DAIR, etc. I think these individuals are often much more technically proficient than those in the first camp, and their view of EA is more akin to seeing it as a cult that seeks to indoctrinate people into a bundle of longtermist beliefs and carry water for AI labs. I will say the strategic collaborations are more fruitful here because there is more technical proficiency, and personally I believe the latter group has better epistemics and is more truth-seeking, even if much more acerbic in its rhetoric. The higher level of technical proficiency means they can contribute to the UK taskforce on things like cybersecurity and evals.
I think measuring only along the axis of how tractable it is to gain allies is the wrong question; the real question is what the fruits of collaboration would be.
FAccT attendees are mostly a distinct group of researchers from the AI ethics researchers who come from, or are actively assisting, marginalised communities (rather than working with, e.g., fairness and bias abstractions).
Hmm, I’m not quite sure I agree that there’s such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit’s perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that as more of a difference in degree than a difference in kind.
I also disagree that people in your second camp are going to be fruitful for collaboration, as they don’t just have technical objections but, I think, core philosophical objections to EA (or what they view as EA).
I guess overall I’m not sure. It’d be interesting to see some mapping of AI researchers in some kind of belief-space plot so different groups could be distinguished. I think it’s very easy to extrapolate from a few small examples and miss what’s actually going on—which I admit I might very well be doing with my pessimism here, but I sadly think it’s telling that I see so few counterexamples of collaboration while I can easily find examples of AI researchers dismissive of or hostile to the AI Safety/xRisk perspective.
I don’t think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it’ll be hard to collaborate if one or both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion not reason (etc.).
I totally buy “there are lots of good sensible AI ethics people with good ideas, we should co-operate with them”. I don’t actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It’s only the idea that “be co-operative” will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I’m a bit skeptical of. My claim is not “AI ethics bad”, but “you are unlikely to be able to persuade the most AI hostile figures within AI ethics”.
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues—you’re never going to be able to make much headway with a few of the most hardcore safety people that your justice/bias etc work is anything but a trivial waste of time; anyone sane is working on averting the coming doom.
Don’t need to convince everyone; and there will always be some background of articles like this. But it’ll be a lot better if there’s a core of cooperative work too, on the things that benefit from cooperation.
I know you’re probably extremely busy, but if you’d like to see more collaboration between the x-risks community and ai ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post.
I’m significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
I expect many communities would agree on working to restrict Big Tech’s use of AI to consolidate power. List of quotes from different communities here.
EA isn’t unitary so people should individually just try cooperating with them on stuff and being like “actually you’re right and AIs not being racist is important” or should try to make inroads on the actors’ strike/writer’s strike AI issues. Generally saying “hey I think you are right” is usually fairly ingratiating.
For what it’s worth, a friend of mine had an idea to do Harberger taxes on AI frontier models, which I thought was cool and was a place where you might be able to find common ground with more leftist perspectives on AI
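For readers unfamiliar with the mechanism, here is a minimal sketch of how a Harberger tax works in general (this is my own illustration, not the friend's proposal; all names, rates, and numbers are hypothetical): the holder self-assesses the asset's value, pays a recurring tax on that self-assessment, and must sell to anyone willing to pay the declared price.

```python
# Hypothetical sketch of a Harberger tax, applied here to a made-up
# "frontier model licence". Names and numbers are illustrative only.
from dataclasses import dataclass


@dataclass
class HarbergerAsset:
    owner: str
    declared_value: float  # owner's self-assessed price
    tax_rate: float        # fraction of declared value owed per period

    def tax_due(self) -> float:
        """Periodic tax owed on the self-assessed value."""
        return self.declared_value * self.tax_rate

    def buy(self, buyer: str, offer: float) -> bool:
        """Anyone may take over the asset by paying the declared value."""
        if offer >= self.declared_value:
            self.owner = buyer
            self.declared_value = offer  # new owner re-declares at the offer
            return True
        return False


# The incentive: declaring too high inflates your tax bill,
# declaring too low invites a forced sale.
asset = HarbergerAsset(owner="LabA", declared_value=100.0, tax_rate=0.07)
print(asset.tax_due())          # ~7.0 owed per period
print(asset.buy("LabB", 90.0))  # False: offer below declared value
print(asset.buy("LabB", 100.0))  # True: forced sale at declared price
print(asset.owner)              # LabB
```

The appeal for common ground with leftist perspectives, as I understand the idea, is that it taxes concentrated ownership of powerful models while keeping access contestable.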
People should say that things are right when they agree with them, even if there wasn’t strategic purpose in doing so.
I doubt being sympathetic to left economic stuff on AI will do much to help persuade people whose complaint is that EAs are racists/sexist/authoritarian/naive utilitarian. Though it would certainly help with people who are just (totally reasonably!, I am worried about this!) concerned about EAs ties to the industry.
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I’ll note that I stopped reading the linked article after “Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do “narrow EA” or “global EA”.
I agree that’s a good argument why that article is a bigger deal than it seems, but I’d still be quite surprised if it were at all comparable to the EV of having the UK so switch on when it comes to alignment.
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
I’ve heard versions of the claim multiple times, including from people I’d expect to know better, so having the survey data to back it up might be helpful even if we’re confident we know the answer.
Where I think most EAs would strongly disagree is that they would find pursuing SAI “at all costs” abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EAs’ professed beliefs wouldn’t be entirely convincing to some people, given the close connections between EAs and rationalists in AI.
I feel a bit uneasy that EAs should put in a lot of effort into a survey (both the survey designers and takers) just because someone made up something at some point. Maybe asking the people who you’d expect to know better, why they believe what they believe?
I think that EA has made the correct choice in deciding to focus on inside game. As indicated by the article, it seems like we’ve been incredibly successful at it. I agree that in an ideal world, we would save humanity by playing the outside game, but I feel that the current inside game is increasing our odds by enough that I feel very comfortable with our decision to promote it.
I agree that it’s worth thinking about the potential for this success to result in a backlash, though surveys seem to indicate more concern among the public about AI risks than I had expected, so I’m not especially worried about there being a significant public backlash.
Nonetheless, it doesn’t make sense to take unnecessary risks, so there are a few things we should do:
• I’d love to see EA develop more high-quality media properties like the 80k podcast, Rob Miles, or Rational Animations, but very few people have the skills.
• Books, combined with media releases and appearances on podcasts, are one way we can attempt to increase our support among the public.
• I think it makes sense to try our best to avoid polarisation. If it seems that one side of the political spectrum is becoming hostile, then it would make sense to initiate some concerted outreach to it.
Thanks for your comment Chris! Although it appears contradictory? In the first half, you say we’ve made the right choice by focusing on the inside game, but in the second half, you suggest we expend more resources on outside game interventions.
Is your overall take that we should mostly do inside game stuff, but that perhaps we’re due a slight reallocation in the direction of the outside game?
Exactly. I think EA should mostly focus on inside game, but that, as a lesser priority, we should take steps to mitigate the risks associated with this.
I think there’s a good chance we broadly agree. If you had to put a number on it, what would you say is our current percentage split between inside game and outside game? And what would your new ideal split be?
Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs.
If what you and Jan say is true (not that I doubt you; it doesn’t mesh with my experience of being an open EA, but then I don’t live in the policy world), then this does need to be higher up the EA priority list.
I’d strongly, strongly advise against ‘hiding’ beliefs here. If there is already a hostile set of opponents actively looking to discredit EA and EA-links then we need to be a lot more pro-active in countering incorrect framings of EA and being more assertive to opponents who think EA is worth discrediting.
I think one low-hanging fruit is publicly dissociating from Elon Musk. He often gets brought up even though he’s not part of the community. There’s also very legitimate EA-/longtermism-based criticism of him available.
No, not really; I am myself confused and wanted to provoke those who know more to reply and clarify. (Which James Herbert has already slightly done, and I hope more direct info will surface.)
Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs.
I’ve heard the same thing from US sources about the US policy space, to the extent that important information doesn’t get shared on the EA Forum because it would associate it with EA.
I think events are underrated in EA community building.
I have heard many people argue against organising relatively simple events such as, ‘get a venue, get a speaker, invite people’. I think the early success of the Tien Procent Club in the Netherlands should make people doubt that advice.
Why? Well, the first thing to mention is that they simply get great attendance, and their attendees are not typical EAs. I think their biggest so far has been 400, and the typical attendee is a professional in their 30s or 40s. It also does an amazing job of generating buzz. For example, suppose you’ve got a journalist writing an article about your community. In that case, it’s pretty cool if you can invite them to an event with hundreds of regular people in attendance.
Now, of course, attendance doesn’t translate to impact. However, I think we can see the early signs of people actually changing their behaviour.
For example, running a quick check on GWWC’s referral dashboard, I can see four pledges that refer to the Tien Procent Club (2 trial, 2 full). Based on GWWC’s March 2023 impact evaluation, they can therefore self-attribute ~$44k of 2022-equivalent donations to high-impact funding opportunities.
This is despite the fact they started less than two years ago and don’t have any funding other than what they have provided themselves or raised through selling tickets.
What’s more, it’s beginning to look like their formula works in different contexts. They started in Amsterdam, but since then they’ve seeded new organising teams elsewhere in the Netherlands, and the teams in Rotterdam and Utrecht have successfully organised their first events.
One caveat to all of this is that they received quite a bit of promotion from Rutger Bregman, a very prominent writer in the Netherlands. I know people are going to experiment with the TPC format abroad. I assume they won’t have a similar ambassador. It will therefore be interesting to see if the formula still works without such a resource.
In the meantime, my current takeaways are: (1) get an endorsement from someone like Bregman; (2) if your target audience is non-students who aren’t already EAs, put on events that only require shallow engagement but are good fun; and (3) focus on doing what you’re good at (e.g., they only do large events, and they only do them once every 3 months).
I have heard many people argue against organising relatively simple events
I’m actually very surprised to hear this. What does the “common view” presume then?
Personally, I see 3 tiers of events:
1. Casual, low-commitment, low-stakes events
2. Big EA conferences, which I find quite valuable for meeting lots of people intentionally and socially
3. Professionally-focused events (research fellowships, incubators, etc.)
I think “simple” events like 1 are great for socialising and meeting new people. While 2 and 3 get more done, I don’t think the community would feel as welcoming if the only events occurring were ones where you had to be fully professional.
Sometimes I still want to interact with EAs, but without the expectation of “meeting right” or “networking”. I suspect this applies especially to introverts and beginners. Even just going to a conference with the expectation of booking lots of 1-on-1s vs just chilling feels very different.
Yeah, that’s a good categorisation, although often 3 is less ‘professionally focused events’ and more ‘events for highly committed EAs’.
I think the common EA CB view is captured in the below quote (my own italics), which is taken from the CEA’s Group Resource Centre’s page ‘How do EA groups produce impact?’.
We believe a central obstacle to progress on the world’s most pressing problems is a shortage of talented people taking significant actions. Therefore, we are particularly excited about people pivoting into high-impact careers. This is not to say that we don’t think spending time on funding and sharing EA ideas is not positive, just that perhaps these things are not as neglected as providing platforms for talented people to take significant actions. Some examples could be changing career plans, founding organisations or start-ups, or assisting those already producing impact. For more examples, see Ollie Base’s EA forum post about what people who were part of the EA Warwick group are doing now.
That isn’t to say groups should only optimize for career changes (and we don’t advocate for trying to push people into specific careers); it’s one useful frame for understanding your group’s impact. This also suggests that you should focus time and effort on deeply engaging the most committed members rather than just shifting some choices of many people.
I think this is broadly right. But I think EA CBs often overcorrect in this direction and, as a result, neglect events that aim for broad reach but shallow engagement.
On CB, here are my views, which are half informed by EA CBs and half personal opinion:
Very casual events—If you are holding no events for a long time and don’t have much capacity, just hold low-stakes casual events and follow up with highly-engaged people afterwards. Highly-engaged people tend to show up/follow up several times after learning about EA anyway. 80-90% of the time, I think having some casual events every few weeks is better than no casual events.
Bigger events—Try to direct highly-engaged people to bigger and/or more specialised events. The EA community is big and diverse, and letting people know other events exist lets them self-select better. When I first explored beyond EA Singapore, I spent 2 months straight learning about every EA org and resource in existence, individually reviewing all the Swapcard profiles at every EAG. That was absolutely worth the effort, IMO.[1]
1-on-1s are probably still important—1-on-1s with someone of very similar interest areas or career trajectories are the most valuable experiences in EA, in my opinion. Only 10% of 1-on-1s are like this, but they more than make up for the 90% that don’t really go anywhere. As much as I try to optimise, this seems to be a numbers game of just finding and meeting a lot of potentially interesting people.[2]
Online resources—For highly-engaged EAs, important information should be online-first. I’m of the opinion that highly-engaged/agentic new EAs tend to read a lot online, and can gain >80% of the same field-specific knowledge reading on their own. This especially holds true in AI Safety, which is like … code and research that’s all publicly available short of frontier models. I think events should be for casual socials, intentional networking and accountability+complex coordination (basically, coworkers).
If you want the 80/20 for AI Safety: check out aisafety.training and aisafety.world; check the EA Forum, LessWrong and the Alignment Forum once a week (~1 hour/week); check the 80k job board and EA Opportunities Board once a week (~20 minutes/week); and review forum tags for things like prizes, job opportunities and research programs to see what programs were run last year that will be run again this year.
It is possible to capture all open opportunities this way. The rest is just researching interesting orgs, seeing which ones you vibe with and engaging with them. This is just for AI Safety, for other cause areas I’d expect the same amount of time spent passively checking.
My personal view is people should slightly prioritise “potentially interesting” over “potentially useful”. The few times I’ve met EAs just because they’re high-ranking, the conversation is usually generic and could have been had by Googling and emailing/texting.
When I first started at EA Netherlands I was explicitly advised against it, and more generally it seems to be ‘in the air’. For example:
The groups resource hub says “This also suggests that you should focus time and effort on deeply engaging the most committed members rather than just shifting some choices of many people.”
Kuhan’s widely shared post on ‘lessons from running Stanford EA’ has in its summary “Focus on retention and deep engagement over shallow engagement”
CEA’s Groups Team’s post on ‘advice we give to new university organiser’ says “We think it’s good to do broad recruiting at the beginning of the semester, as with any club or activity. But beyond this big push of raising awareness, we think it’s most often better to pay more attention to people who seem very interested in—and willing to take significant action based on—EA ideas”
Writing this out has made me realise something. I think this advice makes more sense in a university context, where students are time-rich and are going through an intense social experience, but it makes less sense when you’re targeting professionals. I suspect it’s still ‘in the air’ because, historically, CEA has been very good at targeting students.
As a consequence, very few national orgs (including ourselves) organise TPC-esque events (broad reach, low engagement). For us, this is because our strategy is to focus on supporting local organisers in organising their own events (the theory is that then we can have lots of events without having to organise all of them ourselves). But I don’t think that’s the case for other national organisations (other national CBs, please jump in and correct me if I’m wrong, e.g., I know @lynn at EA UK has been organising career talks).
Ultimately, I guess what I’m saying is what I’ve said elsewhere: you need a blend of ‘mobilising’ (broad reach, low engagement) and ‘organising’ (narrow reach, high engagement), and I think EA groups often do too much organising.
I guess I don’t interpret those bullets as “arguing against organising simple events” but rather “put your effort into supporting more engaged people” and that could even be consistent with running simple events, since it means less time on broad outreach compared to e.g. a high-effort welcoming event.
I agree with the first part of your last sentence (the blend); I don’t know how EA groups spend their time.
Hmm, yeah, but by arguing for “put your effort into supporting more engaged people” you’re effectively arguing against “relatively large events that require relatively shallow engagement”. I think that’s the mistake. I think it should be an even blend of the two.
EA should take seriously its shift from a lifestyle movement to a social movement.
The debate surrounding EA and its classification has always been a lively one. Is it a movement? A philosophy? A question? An ideology? Or something else? I think part of the confusion comes from its shift from a lifestyle movement to a social movement.
In its early days, EA seemed to bear many characteristics of a lifestyle movement. Initial advocates often concentrated on individual actions—such as personal charitable donations optimised for maximum impact or career decisions that could yield the greatest benefit. The movement championed the notion that our day-to-day decisions, from where we donate to how we earn our keep, could be channelled in ways that maximised positive outcomes globally. In this regard, it centred around personal transformation and the choices one made in their daily life.
However, as EA has evolved and matured, there’s been a discernible shift. Today, whilst personal decisions and commitments remain at its heart, there’s an increasing emphasis on broader, systemic changes. The community now acknowledges that while individual actions are crucial, tackling the underlying causes of global challenges often necessitates a coordinated, collective effort. Effective Altruists are now engaging in policy advocacy, research to address large-scale global issues, and even the founding of organisations dedicated to high-impact interventions.
This transition towards the hallmarks of a social movement has implications for the leaders within the EA community. As the movement grows in scope and influence, community leaders are tasked with the responsibility of not only guiding individual decisions but also shaping collective strategies. This requires fostering collaborations, engaging with external stakeholders, and navigating the complexities of systemic change. Moreover, there’s an increased need for inclusivity and representation to ensure that the movement addresses diverse perspectives and challenges.
In conclusion, whilst EA might have originated in lifestyle choices, it’s blossoming into a robust social movement. For community leaders, this means adapting to new roles and responsibilities, aiming not just for personal improvement but for broader societal transformation.
Sure! Ultimately, I think we should be aiming for a movement that looks something like this.
In terms of behaviours that would signal people taking this seriously, an example might be a rebalancing of how community building work is evaluated. Currently, the main outcome funders look for is longtermist career changes. This encourages very lifestyle movement-y community building. I would like to see more weight being given to things like the generation of passive support, e.g., is the public shifting support towards the movement? Is the movement’s narrative being elevated in public discourse?
To use terminology I’ve used elsewhere, this change would encourage more ‘mobilising’ and less ‘organising’. It would also encourage a rebalancing of our ‘social change portfolio’ in such a way that we become a slightly more outward-facing movement, one that spends more time talking to and working with the rest of society to achieve shared objectives and less time talking to ourselves.
Rutger Bregman has just written a very nice story on how Rob Mather came to found AMF! Apart from a GWWC interview, I think this is the first time anyone has told this tale in detail. There are a few good lessons in there if you’re looking to start a high-impact org.
It’s in Dutch, but Google Translate works very well!
What do you believe is the ideal size for the Dutch EA community?
We recently posed this question in our national WhatsApp community. I was surprised by the result, and others I’ve spoken to were also surprised. I thought I’d post it here to get other takes.
We defined ‘being a member’ as “someone who is motivated in part by an impartial care for others, is thinking very carefully about how they can best help others, and who is taking significant actions to help (most likely through their careers). In practice, this might look like selecting a job or degree program, donating a substantial portion of their income, working on EA-related projects, etc.”
We asked people to choose from the following:
<0.01% of population (<1,700)
0.01% of population (1,700)
0.1% of population (17,000)
1% of population (170,000)
>1% of population (>170,000)
I don’t know
The results:
28 votes in total
By far the most popular (24 votes) was “>1% of population (>170,000)”
“someone who is motivated in part by an impartial care for others, is thinking very carefully about how they can best help others, and who is taking significant actions to help (most likely through their careers). In practice, this might look like selecting a job or degree program, donating a substantial portion of their income, working on EA-related projects, etc.
Why would you not want >1% of the population to fit this description? I think even prominent EA haters would be in favor, if you left the name “EA” out.
People often argue for ‘Narrow EA’. Here is an example of where I suggested this strategy might not be wise and people disagreed.
Although of course, there’s an ‘at the current margin’ thing going on here. I.e., maybe the ideal size is huge, but since we’ve got limited time and resources we should not aim for that and instead focus on keeping it small and high quality.
Perhaps a more informative question would be something like, “For the next 5 years, should the Dutch EA community aim for broad growth or narrow specialisation?” (in other words, something similar to this Q from the MCF survey).
Yeah, I think you ended up asking “would it be good for a lot of people to share our values”, instead of “should we try to actively recruit tons of people to our specific community”
I asked, “As we plan our future initiatives, it’s useful to understand where our community believes we should focus our efforts. Please share your opinion on which of the following we should prioritise.
Growing the Community: Focus on increasing our membership and raising broader awareness of EA.
Developing Community Depth: Concentrate on deepening understanding and engagement.
Taking a Balanced Approach: Allocate our efforts equally between growing and deepening.
Other (Please specify): If you have a different perspective, we’d love to hear it.
I don’t know”
27 people voted, 16 voted for ‘taking a balanced approach’, 6 for ‘growing the community’, 1 for ‘developing community depth’, and 4 for ‘I don’t know’.
‘Narrow EA’ and having >1% of the population fitting the above description aren’t opposite strategies.
Maybe it’s similar to someone interested in animal welfare thinking alt protein coordination should focus on scientists, entrepreneurs, funders and policy makers but also thinking it would be good for there to be lots of people interested in veganism.
Aren’t they? Like, if I’m aiming for >1% of the population I ought to spend a lot of my resources on marketing and building a network of organisers. If I’m aiming for something smaller I ought to spend my time investing in the community I’ve already got and maybe some field building.
To make it more concrete, in Q1 of 2024 I could spend 15% of my time investing in our marketing so that we double the number of intro programme sign-ups; alternatively, I could put that time into developing a Dutch Existential Risk Initiative. One is big EA, one is narrow EA.
I think it depends on how you define ‘narrow EA’, if you focus on getting 1% of the population to give effectively, that’s different to helping 100 people make impactful career switches but both could be defined as narrow in different ways.
One being narrow as it focuses on a small number of people, one being narrow as it spreads a subset of EA ideas.
Taking the Dutch Existential Risk Initiative example, it will be narrow in terms of cause focus but the strategy could still vary between focusing on top academics or a mass media campaign.
I’m pretty sure Narrow EA is usually used to refer to the strategy of influencing a small number of particularly influential people. That’s part of what I’m pushing back against (although we’ve deviated from the original discussion point, which was on organising vs mobilising). [got confused about which quicktake we were discussing]
I think all of the ERIs are narrow (they target talented researchers). A more broad project would be the Existential Risk Observatory, which aims to inform the public through mass media outreach. They’ve done a lot of good work in the Netherlands and abroad, but I don’t think they’ve been able to get funding from the biggest EA funds. I don’t know why but I suspect it’s because their main focus is the general public, and not the decision-makers.
why would you like there to be fewer people “motivated in part by an impartial care for others, [are] thinking very carefully about how they can best help others [...]”?
edit: please ignore, just saw that titotal asked the same question 10 minutes earlier.
Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.
I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the ‘narrow EA’ strategy is a mistake because there’s a good chance it is wrong to try to guide society without broader societal participation.
In other words, if MacAskill argues here we should get our shit together first and then either a) collectively decide on a way forward or b) allow for everyone to make their own way forward, I think it’s also important that ‘the getting our shit together’ has broad societal participation.
My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like “the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism” then I wouldn’t be surprised to see articles saying “How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans”. Doesn’t mean we should ignore the negative pieces, and there very well may be things we can do to decrease it at the margin, but ultimately, I’d be surprised if there was a way around it. I also think it’s notable how much press there is that agrees with AI X-risk concerns; it’s not like there’s a consensus in the media that it should be dismissed.
+1; except that I would say we should expect to see more, and more high-profile.
AI xrisk is now moving from “weird idea that some academics and oddballs buy into” to “topic which is influencing and motivating significant policy interventions”, including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).
The former, for a lot of people (e.g. folks in AI/CS who didn’t ‘buy’ xrisk) was a minor annoyance. The latter is something that will concern them—either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided.
I would think it’s reasonable to anticipate more of this.
or because they feel it as a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)
Hmm, I agree that with influence comes increased scrutiny, and the trade-off is worth it in many cases, but I think there are various angles this scrutiny might come from, and I think this is a particularly bad one.
Why? Maybe I’m being overly sensitive but, to me, the piece has an underlying narrative of a covert group exercising undue influence over the government. If we had more of an outside game, I would expect the scrutiny to instead focus on either the substance of the issue or on the outside game actors. Either would probably be an improvement.
Furthermore, there’s still the very important issue of how appropriate it is for us to try to guide society without broader societal participation.
My honest perspective is that if you’re a lone individual affecting policy, detractors will call you a wannabe tyrant; if you’re a small group, they’ll call you a conspiracy; and if you’re a large group, they’ll call you an uninformed mob. Regardless, your political opponents will attempt to paint your efforts as illegitimate, and while certain lines of criticism may be more effective than others, I wouldn’t expect scrutiny to simply focus on the substance either way.
I agree that we should have more of an outside game in addition to an inside game, but I’d also note that efforts at developing an outside game could similarly face harsh criticism (e.g., “appealing to the base instincts of random individuals, taking advantage of these individuals’ confusion on the topic, to make up for their own lack of support from actual experts”).
Maybe I’m in a bubble, but I don’t recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as “uninformed mobs”. This article from the Daily Mail is about as close as it gets, but I think I’d rather have the Daily Mail writing about a wild What We Ourselves party than Politico insinuating a conspiracy.
Ultimately, I don’t think any of us know the optimal split in a social change portfolio between the outside game and the inside game, so perhaps we should adapt as the criticism comes in. If we get a few articles insinuating conspiracy, maybe we should reallocate towards the outside game, and vice versa.
And again, I know I sound like a broken record, but there’s also the issue of how appropriate it is for us to try to guide society without broader participation.
So progressive causes will generally be portrayed positively by progressive-leaning media, but conservative-leaning media has definitely portrayed all those movements as ~mobs (especially BLM and Extinction Rebellion), and predecessor movements, such as the Civil Rights Movement, were likewise often portrayed as mobs by detractors. Now, maybe you don’t personally find conservative media to be “reputable,” but (at least in the US, perhaps less so in the UK) around half the power will generally be held by conservatives (and perhaps more than half going forward).
Yeah, the phrase “woke mob” (and similar) is extremely common in conservative media!
I suspect the ideology of Politico and most EAs are not that different (i.e. technocratic liberal centrism).
For sure progressive publications will be more positive, and I don’t think conservative media ≠ reputable.
When I say “reputable publications” I am referring to the organisations at the top of this list of the most trusted news outlets in the US. My impression is that very few of these regularly characterise the aforementioned movements as “uninformed mobs”.
So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which similarly both use rhetoric like that. (And FWIW, I also wouldn’t be super surprised to see somewhat similar rhetoric from WSJ or Forbes, though probably said less bluntly.)
I’d also note that the left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc), so it’s not just a one-directional phenomenon.
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define ‘reputable’ as ‘those organisations most trusted by the general public’, which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. is not reputable. But then maybe YouGov’s method is flawed? That’s plausible.
But we’ve fallen into a bit of a digression here. As I see it, there are four cruxes:
Does a focus on the inside game make us vulnerable to the criticism that we’re a part of a conspiracy?
For me, yes.
Does this have the potential to undermine our efforts?
For me, yes.
If we reallocate (to some degree) towards the outside game in an effort to hedge against this risk, are we likely to be labelled an uninformed mob, and thus undermine our efforts?
For me, no, not anytime soon (although, as you state, organisations such as Fox will do this before organisations such as PBS, and Fox is trusted by an important subsection of the US population).
Is it unquestionably OK to try to guide society without broader societal participation?
For me, no.
I think our biggest disagreement is with 3. I think it’s possible to undermine our efforts by acting in such a way that organisations such as Fox characterise us as an uninformed mob. However, I think we’re a long, long way from that happening. You seem to think we’re much closer, is that correct? Could you explain why?
I don’t know where you stand on 4.
P.S. I’m enjoying this discussion, thanks for taking the time!
I agree and this is why I’m in favour of a Big Tent approach to EA. This risk comes from a lack of understanding about the diversity of thought within EA and that it isn’t claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but “EA” can’t—it isn’t a person, it’s a network.
I really liked this post from @Shakeel Hashim, ‘How CEA’s communications team is thinking about EA communications at the moment’, and I hope that, whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.
This is really interesting. Thanks for sharing!
I think:
If you have a lot of influence, articles like this are inevitable.
EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That’s where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects to earlier predictions of them (even if the predictions are mostly true—this is a very hard dynamic to manage).
The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we’ll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
EA is also vulnerable to criticism as an elitist movement, and its interconnection with the AI industry will make it seem biased.
EA is not a unitary actor and EAs will often have opposing views on things. This makes any sort of reputation management quite challenging.
The most natural precedent for EA is the Freemasons, and people hated them.
Thanks!
I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.
The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from orgs like the FT and Politico. Meanwhile, with EA, people like Orban don’t give a damn about EA but Politico is already publishing scathing pieces.
I don’t think reputation management is as hard as is often supposed in EA. I think it’s just it hasn’t been prioritised much until recently (e.g., CEA didn’t have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don’t have the funding.
Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it’s in our short-term interests) so I’m a bit pessimistic, though I agree it is a good idea to try. What’s a good goal to help them accomplish in your view?
Some are hostile but not all, and there are disagreements and divisions just as deep if not deeper in AI ethics as there are in EA or any other broad community with multiple important aims that you can think of.
External oversight over the power of big tech is a good goal to help accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this Sean, but I really don’t think there’s much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she’ll use her influence as strongly as possible in the ‘AI Ethics’ community.
Seth Lazar also seems intractably anti-EA. It’s annoying how much of this dialogue happens on Twitter/X, especially since it’s very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor once also posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone once, lamenting the gap between the Safety and Ethics fields. I just really haven’t seen where the Safety->Ethics hostility is; I’ve really only ever seen the reverse, but of course I’m sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work along these lines can happen.
But I really think there’s a strong anti-EA sentiment amongst the generally left-wing/critical-aligned parts of the ‘AI Ethics’ field, and they aren’t taking any prisoners. In their eyes, AI xRisk Safety is bad, EA is bad, and we’re in a direct zero-sum conflict over public attention and power. I think offering a hand is commendable, but any AI Safety researchers reading this had better have their shields at the ready in case hostile attacks come.
From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of “everything for everyone” models – and also distracted away from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety have collaborated and mingled with people in large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment.
But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey, and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics community.
I understand if you look at tweets by people like Dr Gebru, that it can appear overly intense and like it’s not warranted (what did we ever say to them?). But we need to be aware of the historical position of power that the AI Safety community has actually had, what narratives we ended up spreading (moving the Overton window over “AGI”), and what that has led to.
From the perspective of AI Ethics researchers, here is this dominant group of longtermists broadly that has overall caused all this damage. And AI Ethics people are organising and screaming from the top of their lungs to get the harms to stop.
From their perspective, they need to put pressure on longtermists, and they need to call them out in public, otherwise the harms will continue. The longtermists are not as aware of those harms (or don’t care about them that much compared to their techno-future aspirations), so longtermists see it as unfair/bad to be called out this way as a group.
Then when AI Ethics researchers critique us with words, some people involved around our community (usually the more blatant ones) are like “why are you so mean to us? why are you saying transhumanists are like eugenicists? why are you against us trying to steer technological progress? why don’t you consider extinction risks?”
Hope that’s somewhat clarifying.
I know this is not going to resonate for many people here, so I’m ready for the downvotes.
I found this comment very helpful Remmelt, so thank you. I think I’m going to respond to this comment via PM.
I think this is imprecise. In my mind there are two categories:
People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen in his complaints about the UK taskforce and his attempts to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). These more classical ethicists are, from what I can see, analytical philosophers looking for funding and in clout competition with EA. They’ve lost a lot of social capital because they keep repeating old canards about AI. My model of them is that they can’t do fizzbuzz or explain what a transformer is, so they just say sentences about how AI can’t do things and how there’s a lot of hype and power centralisation. They are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which claims RLHF solves alignment and says “Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists.”
People in the other camp are more likely to think EA is problematic, power hungry, and covering for big tech. People in this camp would be your Dr. Gebru, DAIR, etc. I think these individuals are often much more technically proficient than the people in the first camp, and their view of EA is more akin to seeing EA as a cult that seeks to indoctrinate within a bundle of longtermist beliefs and carry water for AI labs. I will say the strategic collaborations are more fruitful here because there is more technical proficiency, and personally I believe the latter group has better epistemics and is more truth-seeking, even if much more acerbic in its rhetoric. The higher level of technical proficiency means they can contribute to the UK taskforce on things like cybersecurity and evals.
I think measuring only along the axis of how tractable it is to gain allies is the wrong question; the real question is what the fruits of collaboration are.
I don’t know why people overindex on loud grumpy twitter people. I haven’t seen evidence that most FAccT attendees are hostile and unsophisticated.
FAccT attendees are mostly a distinct group of researchers from the AI ethics researchers who come from or are actively assisting marginalised communities (and not with eg. fairness and bias abstractions).
Hmm, I’m not quite sure I agree that there’s such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit’s perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that more as a difference in degree rather than a difference in kind.
I also disagree that people in your second camp are going to be fruitful for collaboration, as they don’t just have technical objections but, I think, core philosophical objections to EA (or what they view as EA).
I guess overall I’m not sure. It’d be interesting to see some mapping of AI researchers in some kind of belief-space plot so different groups could be distinguished. I think it’s very easy to extrapolate from a few small examples and miss what’s actually going on—which I admit I might very well be doing with my pessimism here, but I sadly think it’s telling that I see so few counterexamples of collaboration while I can easily find examples of AI researchers dismissive of or hostile to the AI Safety/xRisk perspective.
I don’t think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it’ll be hard to collaborate if one or both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion not reason (etc.).
I totally buy “there are lots of good sensible AI ethics people with good ideas, we should co-operate with them”. I don’t actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It’s only the idea that “be co-operative” will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I’m a bit skeptical of. My claim is not “AI ethics bad”, but “you are unlikely to be able to persuade the most AI hostile figures within AI ethics”.
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues—you’re never going to be able to make much headway with a few of the most hardcore safety people that your justice/bias etc work is anything but a trivial waste of time; anyone sane is working on averting the coming doom.
Don’t need to convince everyone; and there will always be some background of articles like this. But it’ll be a lot better if there’s a core of cooperative work too, on the things that benefit from cooperation.
My favourite recent example of (2) is this paper:
https://arxiv.org/pdf/2302.10329.pdf
Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g.
https://dl.acm.org/doi/10.1145/3278721.3278780
Another would be Haydn Belfield’s new collaboration with Kerry McInerney
http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/
Jess Whittlestone’s online engagements with Seth Lazar have been pretty productive, I thought.
I know you’re probably extremely busy, but if you’d like to see more collaboration between the x-risks community and ai ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post.
I’m significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
I expect many communities would agree on working to restrict Big Tech’s use of AI to consolidate power. List of quotes from different communities here.
EA isn’t unitary, so people should individually just try cooperating with them on stuff, being like “actually you’re right and AIs not being racist is important”, or should try to make inroads on the actors’/writers’ strike AI issues. Generally saying “hey, I think you are right” is usually fairly ingratiating.
For what it’s worth, a friend of mine had an idea to apply Harberger taxes to AI frontier models, which I thought was cool and a place where you might be able to find common ground with more leftist perspectives on AI.
People should say that things are right when they agree with them, even if there wasn’t strategic purpose in doing so.
I doubt being sympathetic to left economic stuff on AI will do much to help persuade people whose complaint is that EAs are racists/sexist/authoritarian/naive utilitarian. Though it would certainly help with people who are just (totally reasonably!, I am worried about this!) concerned about EAs ties to the industry.
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I’ll note that I stopped reading the linked article after “Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do “narrow EA” or “global EA”.
Politico is perhaps the most influential news source for EU decision-makers (h/t @vojtech_b). I’d be wary of dismissing the importance of ‘a few negative articles’ if they’re articles like this.
I agree that’s a good argument why that article is a bigger deal than it seems, but I’d still be quite surprised if it were at all comparable to the EV of having the UK so switch on when it comes to alignment.
If this article is followed by others like it, it could cause the UK to back away from x-risk concerns.
My concern is that this particular media narrative will eventually undermine the policy progress we’ve made.
>”Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” This is inaccurate imo.
Could we get a survey on a few versions of this question? I think it’s actually super-rare in EA.
e.g.
“i believe super-intelligent AI should be pursued at all costs”
“I believe the benefits outweigh the risks of pursuing superintelligent AI”
“I believe that if the risk of doom can be agreed to be <0.2, then the benefits of AI outweigh the risks”
“I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable”
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
Yeah it’s incredibly inaccurate,
I don’t think it even needs to be surveyed. I’ve heard versions of the claim multiple times, including from people I’d expect to know better, so having the survey data to back it up might be helpful even if we’re confident we know the answer.
I think there are truths that are not so far from it. Some rationalists believe Superintelligent AI is necessary for an amazing future. Strong versions of AI Safety and AI capabilities are complementary memes that start from similar assumptions.
Where I think most EAs would strongly disagree with is that they would find pursuing SAI “at all costs” to be abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EA’s professed beliefs wouldn’t be entirely convincing to some people given the close connections between EAs and rationalists in AI.
Good point! You’re right
I feel a bit uneasy about EAs putting a lot of effort into a survey (both the survey designers and takers) just because someone made up something at some point. Maybe ask the people you’d expect to know better why they believe what they believe?
I think that EA has made the correct choice in deciding to focus on inside game. As indicated by the article, it seems like we’ve been incredibly successful at it. I agree that in an ideal world, we would save humanity by playing the outside game, but I feel that the current inside game is increasing our odds by enough that I feel very comfortable with our decision to promote it.
I agree that it’s worth thinking about the potential for this success to result in a backlash, though surveys seem to indicate more concern among the public about AI risks than I had expected, so I’m not especially worried about there being a significant public backlash.
Nonetheless, it doesn’t make sense to take unnecessary risks, so there are a few things we should do:
• I’d love to see EA develop more high-quality media properties like the 80k podcast, Rob Miles or Rationalist Animations, but very few people have the skills.
• Books combined with media releases and appearances on podcasts are one way in which we can attempt to increase our support among the public.
• I think it makes sense to try our best to avoid polarisation. If it seems that one side of the political spectrum is becoming hostile, then it would make sense to initiate some concerted outreach to it.
Thanks for your comment Chris! Although it appears contradictory? In the first half, you say we’ve made the right choice by focusing on the inside game, but in the second half, you suggest we expend more resources on outside game interventions.
Is your overall take that we should mostly do inside game stuff, but that perhaps we’re due a slight reallocation in the direction of the outside game?
Exactly. I think EA should mostly focus on inside game, but that, as a lesser priority, we should take steps to mitigate the risks associated with this.
I think there’s a good chance we broadly agree. If you had to put a number on it, what would you say is our current percentage split between inside game and outside game? And what would your new ideal split be?
epistemic status: gossip
I’ve heard it’s quite harmful to label oneself as EA in the EU policy space after the politico article.
I think maybe let’s revisit in a month. It’s easy for these things to loom larger than they are.
I think JanPro is talking about the EA and Brussels article I referenced in the OP (‘Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI’). This was published in November last year.
Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs.
If what you and Jan say is true (not saying I doubt you, it doesn’t mesh with my experiences being an open EA but then I don’t live in the policy-world) then this does need to be higher up the EA priority list.
I’d strongly, strongly advise against ‘hiding’ beliefs here. If there is already a hostile set of opponents actively looking to discredit EA and EA-links then we need to be a lot more pro-active in countering incorrect framings of EA and being more assertive to opponents who think EA is worth discrediting.
I think one low-hanging fruit is publicly dissociating from Elon Musk. He often gets brought up even though he’s not part of the community. There’s also very legitimate EA-/longtermism-based criticism of him available.
Are you in a position to share more information that might help readers know how much they should update on this comment?
No, not really, I am myself confused and wanted to provoke those who know more to reply and clarify. (Which already James Herbert slightly did and I hope more direct info will surface)
I’ve heard the same thing from US sources about the US policy space, to the extent that important information doesn’t get shared on the EA Forum because it would associate it with EA.
I think events are underrated in EA community building.
I have heard many people argue against organising relatively simple events such as, ‘get a venue, get a speaker, invite people’. I think the early success of the Tien Procent Club in the Netherlands should make people doubt that advice.
Why? Well, the first thing to mention is that they simply get great attendance, and their attendees are not typical EAs. I think their biggest so far has been 400, and the typical attendee is a professional in their 30s or 40s. It also does an amazing job of generating buzz. For example, suppose you’ve got a journalist writing an article about your community. In that case, it’s pretty cool if you can invite them to an event with hundreds of regular people in attendance.
Now, of course, attendance doesn’t translate to impact. However, I think we can see the early signs of people actually changing their behaviour.
For example, running a quick check on GWWC’s referral dashboard, I can see four pledges that refer to the Tien Procent Club (2 trial, 2 full). Based on GWWC’s March 2023 impact evaluation, they can therefore self-attribute ~$44k of 2022-equivalent donations to high-impact funding opportunities.
This is despite the fact they started less than two years ago and don’t have any funding other than what they have provided themselves or raised through selling tickets.
What’s more, it’s beginning to look like their formula works in different contexts. They started in Amsterdam, but since then they’ve seeded new organising teams elsewhere in the Netherlands, and the teams in Rotterdam and Utrecht have successfully organised their first events.
One caveat to all of this is that they received quite a bit of promotion from Rutger Bregman, a very prominent writer in the Netherlands. I know people are going to experiment with the TPC format abroad. I assume they won’t have a similar ambassador. It will therefore be interesting to see if the formula still works without such a resource.
In the meantime, my current takeaways are: get an endorsement from someone like Bregman + if your target audience is non-students who aren’t already EAs, put on events that only require shallow engagement but are good fun + focus on doing what you’re good at (e.g., they only do large events, and they only do them once every 3 months).
I’m actually very surprised to hear this. What does the “common view” presume then?
Personally, I see 3 tiers of events: 1. Any casual, low-commitment, low stakes events 2. Big EA conferences that I find quite valuable for meeting lots of people intentionally and socially 3. Professionally-focused events (research fellowships, incubators etc)
I think “simple” events like 1 are great for socialising and meeting new people. While 2 and 3 get more done, I don’t think the community would feel as welcoming if the only events occurring were ones where you had to be fully professional.
Sometimes I still want to interact with EAs, but without the expectation of “meeting right” or “networking”. I suspect this applies especially to introverts and beginners. Even just going to a conference with the expectation of booking lots of 1-on-1s vs just chilling feels very different.
Yeah, that’s a good categorisation, although often 3 is less ‘professionally focused events’ and more ‘events for highly committed EAs’.
I think the common EA CB view is captured in the below quote (my own italics), which is taken from the CEA’s Group Resource Centre’s page ‘How do EA groups produce impact?’.
I think this is broadly right. But I think EA CBs often overcorrect in this direction and, as a result, neglect events that aim for broad reach but shallow engagement.
On CB, my views that are half informed by EA CBs and half personal opinions:
Very casual events—If you are holding no events for a long time and don’t have much capacity, just hold low-stakes casual events and follow-up with high-engaged people afterwards. Highly-engaged people tend to show up/follow up several times after learning about EA anyway. 80-90% of the time, I think having some casual events every few weeks is better than no casual events.
Bigger events—Try to direct highly-engaged people to bigger and/or more specialised events. The EA community is big and diverse, and letting people know other events exist lets them self-select better. When I first explored beyond EA Singapore, I spent 2 months straight learning about every EA org and resource in existence, individually reviewing all the Swapcard profiles at every EAG. That was absolutely worth the effort, IMO.[1]
1-on-1s are probably still important − 1-on-1s with someone of very similar interest areas or career trajectories are the most valuable experiences in EA, in my opinion. Only 10% of 1-on-1s are like this, but they more than make up for the 90% that don’t really go anywhere. As much as I try to optimise, this seems to be a numbers game of just finding and meeting a lot of potentially interesting people.[2]
Online resources—For highly-engaged EAs, important information should be online-first. I’m of the opinion that highly-engaged/agentic new EAs tend to read a lot online, and can gain >80% of the same field-specific knowledge reading on their own. This especially holds true in AI Safety, which is like … code and research that’s all publicly available short of frontier models. I think events should be for casual socials, intentional networking and accountability+complex coordination (basically, coworkers).
If you want the 80/20 for AI Safety: check out aisafety.training and aisafety.world; check the EA Forum, LessWrong and Alignment Forum once a week (~1 hour/week); check the 80k job board and EA Opportunities Board once a week (~20 minutes/week); and review forum tags for things like prizes, job opportunities and research programs to see what programs were run last year that will be run again this year.
It is possible to capture all open opportunities this way. The rest is just researching interesting orgs, seeing which ones you vibe with and engaging with them. This is just for AI Safety, for other cause areas I’d expect the same amount of time spent passively checking.
My personal view is people should slightly prioritise “potentially interesting” over “potentially useful”. The few times I’ve met EAs just because they’re high-ranking, the conversation is usually generic and could have been had by Googling and emailing/texting.
I agree!
> I have heard many people argue against organising relatively simple events such as, ‘get a venue, get a speaker, invite people’.
Where have you heard this? I’ve not seen this.
> get an endorsement from someone like Bregman
Noting that this isn’t easy and could be a large driver of the value!
When I first started at EA Netherlands I was explicitly advised against it, and more generally it seems to be ‘in the air’. For example:
The groups resource hub says “This also suggests that you should focus time and effort on deeply engaging the most committed members rather than just shifting some choices of many people.”
Kuhan’s widely shared post on ‘lessons from running Stanford EA’ has in its summary “Focus on retention and deep engagement over shallow engagement”
CEA’s Groups Team’s post on ‘advice we give to new university organiser’ says “We think it’s good to do broad recruiting at the beginning of the semester, as with any club or activity. But beyond this big push of raising awareness, we think it’s most often better to pay more attention to people who seem very interested in—and willing to take significant action based on—EA ideas”
Writing this out has made me realise something. I think this advice makes more sense in a university context, where students are time-rich and are going through an intense social experience, but it makes less sense when you’re targeting professionals. I suspect it’s still ‘in the air’ because, historically, CEA has been very good at targeting students.
As a consequence, very few national orgs (including ourselves) organise TPC-esque events (broad reach, low engagement). For us, this is because our strategy is to focus on supporting local organisers in organising their own events (the theory is that then we can have lots of events without having to organise all of them ourselves). But I don’t think that’s the case for other national organisations (other national CBs, please jump in and correct me if I’m wrong, e.g., I know @lynn at EA UK has been organising career talks).
Ultimately, I guess what I’m saying is what I’ve said elsewhere: you need a blend of ‘mobilising’ (broad reach, low engagement) and ‘organising’ (narrow reach, high engagement), and I think EA groups often do too much organising.
Thanks, that makes sense.
I guess I don’t interpret those bullets as “arguing against organising simple events” but rather “put your effort into supporting more engaged people” and that could even be consistent with running simple events, since it means less time on broad outreach compared to e.g. a high-effort welcoming event.
I agree with the first part of your last sentence (the blend), I don’t know how EA groups spend their time.
Hmm, yeah, but by arguing for “put your effort into supporting more engaged people” you’re effectively arguing against “relatively large events that require relatively shallow engagement”. I think that’s the mistake. I think it should be an even blend of the two.
EA should take seriously its shift from a lifestyle movement to a social movement.
The debate surrounding EA and its classification has always been a lively one. Is it a movement? A philosophy? A question? An ideology? Or something else? I think part of the confusion comes from its shift from a lifestyle movement to a social movement.
In its early days, EA seemed to bear many characteristics of a lifestyle movement. Initial advocates often concentrated on individual actions—such as personal charitable donations optimised for maximum impact or career decisions that could yield the greatest benefit. The movement championed the notion that our day-to-day decisions, from where we donate to how we earn our keep, could be channelled in ways that maximised positive outcomes globally. In this regard, it centred around personal transformation and the choices one made in their daily life.
However, as EA has evolved and matured, there’s been a discernible shift. Today, whilst personal decisions and commitments remain at its heart, there’s an increasing emphasis on broader, systemic changes. The community now acknowledges that while individual actions are crucial, tackling the underlying causes of global challenges often necessitates a coordinated, collective effort. Effective Altruists are now engaging in policy advocacy, research to address large-scale global issues, and even the founding of organisations dedicated to high-impact interventions.
This transition towards the hallmarks of a social movement has implications for the leaders within the EA community. As the movement grows in scope and influence, community leaders are tasked with the responsibility of not only guiding individual decisions but also shaping collective strategies. This requires fostering collaborations, engaging with external stakeholders, and navigating the complexities of systemic change. Moreover, there’s an increased need for inclusivity and representation to ensure that the movement addresses diverse perspectives and challenges.
In conclusion, whilst EA might have originated in lifestyle choices, it’s blossoming into a robust social movement. For community leaders, this means adapting to new roles and responsibilities, aiming not just for personal improvement but for broader societal transformation.
P.S. This quick take was inspired by this post.
Could you describe what this would look like? What behaviors/actions from people in EA would convince you that they are taking this seriously?
Sure! Ultimately, I think we should be aiming for a movement that looks something like this.
In terms of behaviours that would signal people taking this seriously, an example might be a rebalancing of how community building work is evaluated. Currently, the main outcome funders look for is longtermist career changes. This encourages very lifestyle movement-y community building. I would like to see more weight being given to things like the generation of passive support, e.g., is the public shifting support towards the movement? Is the movement’s narrative being elevated in public discourse?
To use terminology I’ve used elsewhere, this change would encourage more ‘mobilising’ and less ‘organising’. It would also encourage a rebalancing of our ‘social change portfolio’ in such a way that we become a slightly more outward-facing movement, one that spends more time talking to and working with the rest of society to achieve shared objectives and less time talking to ourselves.
Rutger Bregman has just written a very nice story on how Rob Mather came to found AMF! Apart from a GWWC interview, I think this is the first time anyone has told this tale in detail. There are a few good lessons in there if you’re looking to start a high-impact org.
It’s in Dutch, but google translate works very well!
What do you believe is the ideal size for the Dutch EA community?
We recently posed this question in our national WhatsApp community. I was surprised by the result, and others I’ve spoken to were also surprised. I thought I’d post it here to get other takes.
We defined ‘being a member’ as “someone who is motivated in part by an impartial care for others, is thinking very carefully about how they can best help others, and who is taking significant actions to help (most likely through their careers). In practice, this might look like selecting a job or degree program, donating a substantial portion of their income, working on EA-related projects, etc.”
We asked people to choose from the following:
<0.01% of population (<1,700)
0.01% of population (1,700)
0.1% of population (17,000)
1% of population (170,000)
>1% of population (>170,000)
I don’t know
The results:
28 votes in total
By far the most popular (24 votes) was “>1% of population (>170,000)”
3 people voted “0.1% of population (17,000)”
1 person voted “I don’t know”
I was expecting a split between 1,700 and 17,000.
Why would you not want >1% of the population to fit this description? I think even prominent EA haters would be in favor, if you left the name “EA” out.
People often argue for ‘Narrow EA’. Here is an example of where I suggested this strategy might not be wise and people disagreed.
Although of course, there’s an ‘at the current margin’ thing going on here. I.e., maybe the ideal size is huge, but since we’ve got limited time and resources we should not aim for that and instead focus on keeping it small and high quality.
Perhaps a more informative question would be something like, “For the next 5 years, should the Dutch EA community aim for broad growth or narrow specialisation?” (in other words, something similar to this Q from the MCF survey).
Yeah, I think you ended up asking “would it be good for a lot of people to share our values”, instead of “should we try to actively recruit tons of people to our specific community”
Gave it a second go.
I asked, “As we plan our future initiatives, it’s useful to understand where our community believes we should focus our efforts. Please share your opinion on which of the following we should prioritise.
Growing the Community: Focus on increasing our membership and raising broader awareness of EA.
Developing Community Depth: Concentrate on deepening understanding and engagement.
Taking a Balanced Approach: Allocate our efforts equally between growing and deepening.
Other (Please specify): If you have a different perspective, we’d love to hear it.
I don’t know”
27 people voted, 16 voted for ‘taking a balanced approach’, 6 for ‘growing the community’, 1 for ‘developing community depth’, and 4 for ‘I don’t know’.
‘Narrow EA’ and having >1% of the population fitting the above description aren’t opposite strategies.
Maybe it’s similar to someone interested in animal welfare thinking alt protein coordination should focus on scientists, entrepreneurs, funders and policy makers but also thinking it would be good for there to be lots of people interested in veganism.
Aren’t they? Like, if I’m aiming for >1% of the population I ought to spend a lot of my resources on marketing and building a network of organisers. If I’m aiming for something smaller I ought to spend my time investing in the community I’ve already got and maybe some field building.
To make it more concrete, in Q1 of 2024 I could spend 15% of my time investing in our marketing so that we double the number of intro programme sign-ups; alternatively, I could put that time into developing a Dutch Existential Risk Initiative. One is big EA, one is narrow EA.
I think it depends on how you define ‘narrow EA’, if you focus on getting 1% of the population to give effectively, that’s different to helping 100 people make impactful career switches but both could be defined as narrow in different ways.
One being narrow as it focuses on a small number of people, one being narrow as it spreads a subset of EA ideas.
Taking the Dutch Existential Risk Initiative example, it will be narrow in terms of cause focus but the strategy could still vary between focusing on top academics or a mass media campaign.
I’m pretty sure Narrow EA is usually used to refer to the strategy of influencing a small number of particularly influential people.
That’s part of what I’m pushing back against (although we’ve deviated from the original discussion point, which was on organising vs mobilising). [Edit: I got confused about which quicktake we were discussing.] I think all of the ERIs are narrow (they target talented researchers). A broader project would be the Existential Risk Observatory, which aims to inform the public through mass media outreach. They’ve done a lot of good work in the Netherlands and abroad, but I don’t think they’ve been able to get funding from the biggest EA funds. I don’t know why, but I suspect it’s because their main focus is the general public, and not decision-makers.
Why would you like there to be fewer people “motivated in part by an impartial care for others, [who are] thinking very carefully about how they can best help others [...]”? Edit: please ignore, I just saw that titotal asked the same question 10 minutes earlier.