Politico just published a fairly negative article about EA and UK politics. Previously they’ve published similar articles about EA and Brussels.
I think EA tends to focus on the inside game, or narrow EA, and I believe this increases the likelihood of articles such as this. I worry articles such as this will make people in positions of influence less likely to want to be associated with EA, and that this in the long run will undermine efforts to bring about the policy changes we desire. Still, of course, this focus on the inside game is also pretty cost-effective (for the short term, at least). Is it worth the trade-off? What do people think?
My gut feeling is that, putting to one side the question of which is the most effective strategy for reducing x-risk etc., the ‘narrow EA’ strategy is a mistake because there’s a good chance it is wrong to try to guide society without broader societal participation.
In other words, if MacAskill argues here we should get our shit together first and then either a) collectively decide on a way forward or b) allow for everyone to make their own way forward, I think it’s also important that ‘the getting our shit together’ has broad societal participation.
My guess is this is mostly just a product of success, and insofar as the political system increasingly takes AI X-risk seriously, we should expect to see stuff like this from time to time. If the tables were flipped and Sunak was instead pooh-poohing AI X-risk and saying things like “the safest path forward for AI is accelerating progress as fast as we can – slowing down would be Luddism” then I wouldn’t be surprised to see articles saying “How Silicon Valley accelerationists are shaping Rishi Sunak’s AI plans”. Doesn’t mean we should ignore the negative pieces, and there very well may be things we can do to decrease it at the margin, but ultimately, I’d be surprised if there was a way around it. I also think it’s notable how much press there is that agrees with AI X-risk concerns; it’s not like there’s a consensus in the media that it should be dismissed.
+1; except that I would say we should expect to see more, and more high-profile.
AI xrisk is now moving from “weird idea that some academics and oddballs buy into” to “topic which is influencing and motivating significant policy interventions”, including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).
The former, for a lot of people (e.g. folks in AI/CS who didn’t ‘buy’ xrisk), was a minor annoyance. The latter is something that will concern them: either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided.
I would think it’s reasonable to anticipate more of this.
or because they feel it as a threat to their identity or self-image (I expect these to be even larger pain points than the two you identified)
Hmm, I agree that with influence comes increased scrutiny, and the trade-off is worth it in many cases, but I think there are various angles this scrutiny might come from, and I think this is a particularly bad one.
Why? Maybe I’m being overly sensitive but, to me, the piece has an underlying narrative of a covert group exercising undue influence over the government. If we had more of an outside game, I would expect the scrutiny to instead focus on either the substance of the issue or on the outside game actors. Either would probably be an improvement.
Furthermore, there’s still the very important issue of how appropriate it is for us to try to guide society without broader societal participation.
My honest perspective is that if you’re a lone individual affecting policy, detractors will call you a wannabe tyrant; if you’re a small group, they’ll call you a conspiracy; and if you’re a large group, they’ll call you an uninformed mob. Regardless, your political opponents will attempt to paint your efforts as illegitimate, and while certain lines of criticism may be more effective than others, I wouldn’t expect scrutiny to simply focus on the substance either way.
I agree that we should have more of an outside game in addition to an inside game, but I’d also note that efforts at developing an outside game could similarly face harsh criticism (e.g., “appealing to the base instincts of random individuals, taking advantage of these individuals’ confusion on the topic, to make up for their own lack of support from actual experts”).
Maybe I’m in a bubble, but I don’t recall seeing many reputable publications label large-scale progressive movements (e.g., BLM, Extinction Rebellion, or #MeToo) as “uninformed mobs”. This article from the Daily Mail is about as close as it gets, but I think I’d rather have the Daily Mail writing about a wild What We Owe the Future party than Politico insinuating a conspiracy.
Ultimately, I don’t think any of us know the optimal split in a social change portfolio between the outside game and the inside game, so perhaps we should adapt as the criticism comes in. If we get a few articles insinuating conspiracy, maybe we should reallocate towards the outside game, and vice versa.
And again, I know I sound like a broken record, but there’s also the issue of how appropriate it is for us to try to guide society without broader participation.
So progressive causes will generally be portrayed positively by progressive-leaning media, but conservative-leaning media, meanwhile, has definitely portrayed all those movements as ~mobs (especially BLM and Extinction Rebellion), and predecessor movements, such as the Civil Rights movement, were likewise often portrayed as mobs by detractors. Now, maybe you don’t personally find conservative media to be “reputable,” but (at least in the US, perhaps less so in the UK) around half the power will generally be held by conservatives (and perhaps more than half going forward).
Yeah, the phrase “woke mob” (and similar) is extremely common in conservative media!
I suspect the ideology of Politico and most EAs are not that different (i.e. technocratic liberal centrism).
For sure, progressive publications will be more positive, and I don’t mean to imply that conservative media ≠ reputable.
When I say “reputable publications” I am referring to the organisations at the top of this list of the most trusted news outlets in the US. My impression is that very few of these regularly characterise the aforementioned movements as “uninformed mobs”.
So I notice Fox ranks pretty low on that list, but if you click through to the link, they rank very high among Republicans (second only to the Weather Channel). Fox definitely uses rhetoric like that. After Fox (among Republicans) are Newsmax and OAN, which similarly both use rhetoric like that. (And FWIW, I also wouldn’t be super surprised to see somewhat similar rhetoric from WSJ or Forbes, though probably said less bluntly.)
I’d also note that the left-leaning media uses somewhat similar rhetoric for conservative issues that are supported by large groups (e.g., Trumpism in general, climate denialism, etc.), so it’s not just a one-directional phenomenon.
Yes, I noticed that. Certain news organisations, which are trusted by an important subsection of the US population, often characterise progressive movements as uninformed mobs. That is clear. But if you define ‘reputable’ as ‘those organisations most trusted by the general public’, which seems like a reasonable definition, then, based on the YouGov analysis, Fox et al. is not reputable. But then maybe YouGov’s method is flawed? That’s plausible.
But we’ve fallen into a bit of a digression here. As I see it, there are four cruxes:
Does a focus on the inside game make us vulnerable to the criticism that we’re a part of a conspiracy?
For me, yes.
Does this have the potential to undermine our efforts?
For me, yes.
If we reallocate (to some degree) towards the outside game in an effort to hedge against this risk, are we likely to be labelled an uninformed mob, and thus undermine our efforts?
For me, no, not anytime soon (although, as you state, organisations such as Fox will do this before organisations such as PBS, and Fox is trusted by an important subsection of the US population).
Is it unquestionably OK to try to guide society without broader societal participation?
For me, no.
I think our biggest disagreement is with 3. I think it’s possible to undermine our efforts by acting in such a way that organisations such as Fox characterise us as an uninformed mob. However, I think we’re a long, long way from that happening. You seem to think we’re much closer, is that correct? Could you explain why?
I don’t know where you stand on 4.
P.S. I’m enjoying this discussion, thanks for taking the time!
I agree, and this is why I’m in favour of a Big Tent approach to EA. This risk comes from a lack of understanding of the diversity of thought within EA and of the fact that it isn’t claiming to have all the answers. There is a danger that poor behaviour from one part of the movement can impact other parts.
Broadly EA is about taking a Scout Mindset approach to doing good with your donations, career and time. Individual EAs and organisations can have opinions on what cause areas need more resources at the margin but “EA” can’t—it isn’t a person, it’s a network.
I really liked this post from @Shakeel Hashim, ‘How CEA’s communications team is thinking about EA communications at the moment’, and I hope that, whatever happens in terms of shake-ups at CEA, communications and clarity around the EA brand are prioritised.
This is really interesting. Thanks for sharing!
I think:
If you have a lot of influence, articles like this are inevitable.
EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That’s where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects to earlier predictions of them (even if the predictions are mostly true—this is a very hard dynamic to manage).
The article underscores the dangers to a movement so deeply connected to one foundation, and I expect we’ll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
EA is also vulnerable to criticism as an elitist movement, and its interconnection with the AI industry will make it seem biased.
EA is not a unitary actor and EAs will often have opposing views on things. This makes any sort of reputation management quite challenging.
The most natural precedent for EA is the Freemasons, and people hated them.
Thanks!
I agree that negative articles are inevitable if you get influence, but I think there are various angles these negative articles might come from, and this is a particularly bad one.
The Soros point is an excellent analogy, but I worry we could be headed for something worse than that. Soros gets criticism from people like Orban but praise from orgs like the FT and Politico. Meanwhile, with EA, people like Orban don’t give a damn about EA but Politico is already publishing scathing pieces.
I don’t think reputation management is as hard as is often supposed in EA. I think it’s just that it hasn’t been prioritised much until recently (e.g., CEA didn’t have a head of comms until September 2022). I can imagine many national organisations such as mine would love to have a Campaign Officer or something to help us manage it, but we don’t have the funding.
Do you have any encouraging examples of progress on 2? Some of the prominent people are incredibly hostile (i.e. they genuinely believe we are all literal fascists and also Machiavellian naive utilitarians who lie automatically whenever it’s in our short-term interests) so I’m a bit pessimistic, though I agree it is a good idea to try. What’s a good goal to help them accomplish in your view?
Some are hostile but not all, and there are disagreements and divisions just as deep if not deeper in AI ethics as there are in EA or any other broad community with multiple important aims that you can think of.
External oversight over the power of big tech is a good goal to help accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
epistemic status: a frustrated outlet for sad thoughts, could definitely be reworded with more nuance
I really wish I had your positive view on this Sean, but I really don’t think there’s much chance of inroads unless capabilities advance to an extent that makes xRisk seem even more salient.
Gebru is, imo, never going to view EA positively. And she’ll use her influence as strongly as possible in the ‘AI Ethics’ community.
Seth Lazar also seems intractably anti-EA. It’s annoying how much of this dialogue happens on Twitter/X, especially since it’s very difficult for me as a non-Twitter user to find them, but I remember he posted one terrible anti-longtermist thread and later deleted it.
Shannon Vallor also once posted a similarly anti-longtermist thread, and then responded to Jess Whittlestone, lamenting the gap between the Safety and Ethics fields. I just really haven’t seen where the Safety->Ethics hostility has been; I’ve really only ever seen the reverse, but of course I’m 100% sure my sample is biased here.
The Belfield<>McInerney collaboration is extremely promising for sure, and I look forward to the outputs. I hope my impression is wrong and more work along these lines can happen.
But I really think there’s a strong anti-EA sentiment amongst the generally left-wing/critical-aligned parts of the ‘AI Ethics’ fields, and they aren’t taking any prisoners. In their eyes, AI xRisk Safety is bad, EA is bad, and we’re in a direct zero-sum conflict over public attention and power. I think offering a hand is commendable, but any AI Safety researchers reading this had better have their shields at the ready just in case the hostile attacks come.
From the perspective of the AI Ethics researchers, AI Safety researchers and engineers contributed to the development of “everything for everyone” models – and also distracted attention from the increasing harms that result from the development and use of those models.
Both of which, frankly, are true, given how much people in AI Safety collaborated and mingled with people in large AI labs.
I understand that on Twitter, AI Ethics researchers are explicitly critiquing AI Safety folk (and longtermist tech folk in general) more than the other way around.
That feels unfair if we focus on the explicit exchange in the moment.
But there is more to it.
AI Ethics folk are responding with words to harms that resulted from misguided efforts by some key people in AI Safety in the past. There are implicit background goings-on they are concerned about that are hard to convey and not immediately obvious from their writing.
It might not feel like we in AI Safety have much power in steering the development of large AI models, but historically the AI Safety community has been able to exert way more influence here than the AI Ethics community.
I understand that if you look at tweets by people like Dr Gebru, it can appear overly intense and unwarranted (what did we ever say to them?). But we need to be aware of the historical position of power that the AI Safety community has actually had, the narratives we ended up spreading (moving the Overton window over “AGI”), and what that has led to.
From the perspective of AI Ethics researchers, here is this dominant group of longtermists, broadly, that has caused all this damage. And AI Ethics people are organising and screaming from the top of their lungs to get the harms to stop.
From their perspective, they need to put pressure on longtermists, and they need to call them out in public, otherwise the harms will continue. The longtermists are not as aware of those harms (or don’t care about them much compared to their techno-future aspirations), so longtermists see it as unfair/bad to be called out this way as a group.
Then, when AI Ethics researchers critique us with words, some people involved around our community (usually the more blatant ones) are like “why are you so mean to us? why are you saying transhumanists are like eugenicists? why are you against us trying to steer technological progress? why don’t you consider extinction risks?”
Hope that’s somewhat clarifying.
I know this is not going to resonate with many people here, so I’m ready for the downvotes.
I found this comment very helpful Remmelt, so thank you. I think I’m going to respond to this comment via PM.
I think this is imprecise. In my mind there are two categories:
People who think EA is a distraction from near-term issues and is competing for funding and attention (e.g. Seth Lazar, as seen in his complaints about the UK taskforce and his attempts to tag Dustin Moskovitz and Ian Hogarth in his thinkpieces). From what I can see, these more classical ethicists are analytical philosophers looking for funding and in clout competition with EA. They’ve lost a lot of social capital because they keep repeating a lot of old canards about AI. My model of them is something akin to: they can’t do fizzbuzz or explain what a transformer is, so they’ll just say sentences about how AI can’t do things and how there’s a lot of hype and power centralisation. These are more likely to be white men from the UK, Canada, Australia, and NZ. Status games are especially important to them, and they seem to just not have a great understanding of the field of alignment at all. A good example I show people is this tweet, which tries to say RLHF solves alignment and that “Paul [Christiano] is an actual researcher I respect, the AI alignment people that bother me are more the longtermists.”
People in the other camp are more likely to think EA is problematic and power-hungry and covers for big tech. People in this camp would be your Dr. Gebru, DAIR, etc. I think these individuals are often much more technically proficient than the people in the first camp, and their view of EA is more akin to seeing EA as a cult that seeks to indoctrinate people into a bundle of longtermist beliefs and carries water for AI labs. I will say the strategic collaborations are more fruitful here because there is more technical proficiency, and personally I believe the latter group has better epistemics and is more truth-seeking, even if much more acerbic in its rhetoric. The higher level of technical proficiency means they can contribute to the UK Task Force on things like cybersecurity and evals.
I think measuring along only the axis of how tractable it is to gain allies is the wrong question; the real question is what the fruits of collaboration are.
I don’t know why people overindex on loud grumpy twitter people. I haven’t seen evidence that most FAccT attendees are hostile and unsophisticated.
FAccT attendees are mostly a distinct group from the AI ethics researchers who come from, or are actively assisting, marginalised communities (rather than working through, e.g., fairness and bias abstractions).
Hmm, I’m not quite sure I agree that there’s such a clear division into two camps. For example, I think Seth is actually not that far off from Timnit’s perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that as more a difference of degree than of kind.
I also disagree that people in your second camp are going to be useful for fruitful collaboration, as they don’t just have technical objections but, I think, core philosophical objections to EA (or what they view as EA).
I guess overall I’m not sure. It’d be interesting to see some mapping of AI researchers in some kind of belief-space plot so different groups could be distinguished. I think it’s very easy to extrapolate from a few small examples and miss what’s actually going on, which I admit I might very well be doing with my pessimism here, but I sadly think it’s telling that I see so few counterexamples of collaboration while I can easily find examples of AI researchers dismissive of or hostile to the AI Safety/xRisk perspective.
I don’t think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it’ll be hard to collaborate if one or both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion not reason (etc.).
I totally buy “there are lots of good sensible AI ethics people with good ideas, we should co-operate with them”. I don’t actually think that all of the criticisms of EA from the harshest critics are entirely wrong either. It’s only the idea that “be co-operative” will have much effect on whether articles like this get written and hostile quotes from some prominent AI ethics people turn up in them, that I’m a bit skeptical of. My claim is not “AI ethics bad”, but “you are unlikely to be able to persuade the most AI hostile figures within AI ethics”.
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues—you’re never going to be able to make much headway with a few of the most hardcore safety people that your justice/bias etc work is anything but a trivial waste of time; anyone sane is working on averting the coming doom.
Don’t need to convince everyone; and there will always be some background of articles like this. But it’ll be a lot better if there’s a core of cooperative work too, on the things that benefit from cooperation.
My favourite recent example of (2) is this paper:
https://arxiv.org/pdf/2302.10329.pdf
Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g.
https://dl.acm.org/doi/10.1145/3278721.3278780
Another would be Haydn Belfield’s new collaboration with Kerry McInerney
http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/
Jess Whittlestone’s online engagements with Seth Lazar have been pretty productive, I thought.
I know you’re probably extremely busy, but if you’d like to see more collaboration between the x-risks community and ai ethics, it might be worth writing up a list of ways in which you think we could collaborate as a top-level post.
I’m significantly more enthusiastic about the potential for collaboration after seeing the impact of the FLI letter.
I expect many communities would agree on working to restrict Big Tech’s use of AI to consolidate power. List of quotes from different communities here.
EA isn’t unitary, so people should individually just try cooperating with them on stuff and being like “actually you’re right and AIs not being racist is important”, or should try to make inroads on the actors’ strike/writers’ strike AI issues. Generally saying “hey, I think you are right” is usually fairly ingratiating.
For what it’s worth, a friend of mine had an idea to do Harberger taxes on AI frontier models, which I thought was cool and was a place where you might be able to find common ground with more leftist perspectives on AI.
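For readers who haven’t come across the mechanism, here’s a minimal toy sketch of how a Harberger tax works (my own illustration with made-up numbers, not the friend’s actual proposal): the owner self-assesses the value of an asset, pays an annual tax on that declared value, and must sell to anyone who offers at least that amount.

```python
# Toy sketch of a Harberger (self-assessed) tax, only to illustrate the general
# mechanism mentioned above. The tax rate, dollar figures, and the idea of taxing
# a model's declared valuation are all hypothetical, not the original proposal.

TAX_RATE = 0.07  # hypothetical annual tax rate on the self-assessed value


def annual_tax(declared_value: float) -> float:
    """Tax owed each year on the owner's self-declared valuation."""
    return TAX_RATE * declared_value


def must_sell(declared_value: float, offer: float) -> bool:
    """Under Harberger rules, any offer at or above the declared value must be accepted."""
    return offer >= declared_value


# A lab that declares its frontier model to be worth $2B pays $140M/year in tax,
# and has to sell (or, in some variants, license) if anyone bids at least $2B.
print(annual_tax(2_000_000_000))                # 140000000.0
print(must_sell(2_000_000_000, 2_500_000_000))  # True
```

The interesting tension, as I understand it, is that declaring a low value means cheap taxes but makes you easy to buy out, while declaring a high value protects control but costs more; that trade-off is presumably why it might appeal as a lever against concentrated ownership of models.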
People should say that things are right when they agree with them, even if there wasn’t strategic purpose in doing so.
I doubt being sympathetic to left economic stuff on AI will do much to help persuade people whose complaint is that EAs are racist/sexist/authoritarian/naive utilitarian. Though it would certainly help with people who are just (totally reasonably! I am worried about this!) concerned about EA’s ties to the industry.
The UK seems to take the existential risk from AI much more seriously than I would have expected a year ago. To me, this seems very important for the survival of our species, and seems well worth a few negative articles.
I’ll note that I stopped reading the linked article after “Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” This is inaccurate imo. In general, having low-quality negative articles written about EA will be hard to avoid, no matter if you do “narrow EA” or “global EA”.
Politico is perhaps the most influential news source for EU decision-makers (h/t @vojtech_b). I’d be wary of dismissing the importance of ‘a few negative articles’ if they’re articles like this.
I agree that’s a good argument for why that article is a bigger deal than it seems, but I’d still be quite surprised if it were at all comparable to the EV of having the UK so switched on when it comes to alignment.
If this article is followed by others like it, it could cause the UK to back away from x-risk concerns.
My concern is that this particular media narrative will eventually undermine the policy progress we’ve made.
>”Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.” This is inaccurate imo.
Could we get a survey on a few versions of this question? I think it’s actually super-rare in EA.
e.g.
“i believe super-intelligent AI should be pursued at all costs”
“I believe the benefits outweigh the risks of pursuing superintelligent AI”
“I believe if risk of doom can be agreed to be <0.2, then the benefits of AI outweigh the risks”
“I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable”
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
Yeah, it’s incredibly inaccurate. I don’t think it even needs to be surveyed.
I’ve heard versions of the claim multiple times, including from people I’d expect to know better, so having the survey data to back it up might be helpful even if we’re confident we know the answer.
I think there are truths that are not so far from it. Some rationalists believe Superintelligent AI is necessary for an amazing future. Strong versions of AI Safety and AI capabilities are complementary memes that start from similar assumptions.
Where I think most EAs would strongly disagree is with the “at all costs” part: they would find pursuing SAI at all costs abhorrent and counter to their fundamental goals. But I also suspect that showing survey data about EAs’ professed beliefs wouldn’t be entirely convincing to some people, given the close connections between EAs and rationalists in AI.
Good point! You’re right
I feel a bit uneasy about EAs putting a lot of effort into a survey (both the survey designers and takers) just because someone made up something at some point. Maybe ask the people you’d expect to know better why they believe what they believe?
I think that EA has made the correct choice in deciding to focus on inside game. As indicated by the article, it seems like we’ve been incredibly successful at it. I agree that in an ideal world, we would save humanity by playing the outside game, but I feel that the current inside game is increasing our odds by enough that I feel very comfortable with our decision to promote it.
I agree that it’s worth thinking about the potential for this success to result in a backlash, though surveys seem to indicate more concern among the public about AI risks than I had expected, so I’m not especially worried about there being a significant public backlash.
Nonetheless, it doesn’t make sense to take unnecessary risks, so there are a few things we should do:
• I’d love to see EA develop more high-quality media properties like the 80k podcast, Rob Miles, or Rational Animations, but very few people have the skills.
• Books combined with media releases and appearances on podcasts are one way in which we can attempt to increase our support among the public.
• I think it makes sense to try our best to avoid polarisation. If it seems that one side of the political spectrum is becoming hostile, then it would make sense to initiate some concerted outreach to it.
Thanks for your comment Chris! Although it appears contradictory? In the first half, you say we’ve made the right choice by focusing on the inside game, but in the second half, you suggest we expend more resources on outside game interventions.
Is your overall take that we should mostly do inside game stuff, but that perhaps we’re due a slight reallocation in the direction of the outside game?
Exactly. I think EA should mostly focus on inside game, but that, as a lesser priority, we should take steps to mitigate the risks associated with this.
I think there’s a good chance we broadly agree. If you had to put a number on it, what would you say is our current percentage split between inside game and outside game? And what would your new ideal split be?
epistemic status: gossip
I’ve heard it’s quite harmful to label oneself as EA in the EU policy space after the Politico article.
I think maybe let’s revisit in a month. It’s easy for these things to loom larger than they are.
I think JanPro is talking about the EA and Brussels article I referenced in the OP (‘Stop the killer robots! Musk-backed lobbyists fight to save Europe from bad AI’). This was published in November last year.
Many of the EAs I know who work in policy feel like they ought to keep their involvement in EA a secret. I once attended an event in Brussels where the host asked me to hide the fact I work for EA Netherlands. This was because they were worried their opponents would use their links with EA to discredit them. This seems like a very bad state of affairs.
If what you and Jan say is true (not that I doubt you; it doesn’t mesh with my experience of being an open EA, but then I don’t live in the policy world), then this does need to be higher up the EA priority list.
I’d strongly, strongly advise against ‘hiding’ beliefs here. If there is already a hostile set of opponents actively looking to discredit EA and EA-links then we need to be a lot more pro-active in countering incorrect framings of EA and being more assertive to opponents who think EA is worth discrediting.
I think one low hanging fruit is publicly dissociating from Elon Musk. He often gets brought up even though he’s not part of the community. There’s also very legitimate EA-/longtermism-based criticism of him available
Are you in a position to share more information that might help readers know how much they should update on this comment?
No, not really. I am myself confused and wanted to provoke those who know more to reply and clarify. (Which James Herbert has already done to some extent, and I hope more direct info will surface.)
I’ve heard the same thing from US sources about the US policy space, to the extent that important information doesn’t get shared on the EA Forum because it would associate it with EA.