Last time there was an explicitly hostile media campaign against EA, the reaction was to do nothing, and the result is that Émile P. Torres now has a large media presence,[1] that the term TESCREAL has been launched to some success, and that EA-critical thought has become a lot more public and harsh in certain left-ish academic circles.
You say this as if there were ways to respond which would have prevented this. I’m not sure these exist, and in general I think “ignore it” is a really really solid heuristic in an era where conflict drives clicks.
I think responding in a way that is calm, boring, and factual will help. It’s not going to get Émile to publicly recant anything. The goal is just for people who find Émile’s stuff to see that there’s another side to the story. They aren’t going to publicly say “yo Émile I think there might be another side to the story”. But fewer of them will signal boost their writings on the theory that “EAs have nothing to say in their own defense, therefore they are guilty”. Also, I think people often interpret silence as a contemptuous response, and that can be enraging in itself.
Maybe it would be useful to discuss concrete examples of engagement and think about what’s been helpful/harmful.
Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive.
I think this post was probably positive (I read it around a year ago, my recollections are a bit vague). But I guess that post itself could be an argument that even attempting to engage with Émile in good faith is potentially dangerous.
Perhaps the right strategy is something like: assume good faith, except with specific critics who have a known history of bad faith. And consider that your comparative advantage may lie elsewhere, unless others would describe you as unusually good at being charitable.
Offhand, I would guess that the holiday fundraiser that Émile and Nathan Young ran (for GiveDirectly I think it was?) was positive.
What makes you think this? I would guess it was pretty negative, by legitimizing Torres, and most of the donations funging heavily against other EA causes.
I would guess any legitimization of Émile by Nathan was symmetrical with a legitimization of Nathan by Émile. However I didn’t get the sense that either was legitimizing the other, so much as both were legitimizing GiveDirectly. It seems valuable to legitimize GiveDirectly, especially among the “left-ish academic circles” reading Émile who might otherwise believe that Émile is against all EA causes/organizations. (And among “left-ish academics” who might otherwise believe that Nathan scorns “near-termist” causes.)
There’s a lot of cause prioritization disagreement within EA, but it doesn’t usually get vicious, in part because EAs have “skin in the game” with regard to using their time & money in order to make the world a better place. One hypothesis is that if we can get Émile’s audience to feel some genuine curiosity about how to make their holiday giving effective, they’ll wonder why some people are longtermists. I think it’s absolutely fine to disagree with longtermism, but I also think that longtermists are generally thoughtful and well-intentioned, and it’s worth understanding why they give to the causes they do.
most of the donations funging heavily against other EA causes
Do you have specific reasons to believe this? It’s a possibility, but I could just as easily see most donations coming from non-EAs, or EAs who consider GiveDirectly a top pick anyways. Even if EA donors didn’t consider GiveDirectly a top pick on its own, they might have considered “GiveDirectly plus better relations with Émile with no extra cost” to be a top pick, and I feel hesitant to judge this more harshly than I would judge any other EA cause prioritization.
BTW, a mental model here is:
https://markfuentes1.substack.com/p/emile-p-torress-history-of-dishonesty
...it is striking how often these shifts in opinion appear, upon closer inspection, to be triggered by Torres experiencing a feeling of rejection, such as being denied a job, not being invited to a podcast, or having a book collaboration terminated. Torres’s subsequent “realization” that these people and communities, once held in such high esteem, were in fact profoundly evil or dangerous routinely comes after those personal setbacks, as a post hoc rationalization.
If Émile is motivated to attack EA because they feel rejected by it, it’s conceivable to me that their motivation for aggression would decrease if a super kind and understanding therapist-type person listened to them really well privately and helped them feel heard & understood. The fundraiser thing makes me think this could work if done well, although the Helen Pluckrose thing from Mark’s post makes me think it’s risky. But if it’s private, especially from a person who’s not particularly well-known, I assume it wouldn’t run the specific risk of legitimization.
I agree it’s a solid heuristic, but heuristics aren’t foolproof, and it’s important to be able to recognise where they’re not working. I remembered your tweet about choosing intellectual opponents wisely because I think it would be useful to show where we disagree on this:
1 - Choosing opponents is sometimes not up to you. As an analogy, it only takes one party throwing punches for there to be a physical fight. When debates start to have significant consequences socially and politically, it’s worth considering that letting hostile ideas spread unchallenged may work out badly in the future.
2 - I’m not sure it’s clear that “the silent majority can often already see their mistakes” in this case. I don’t think this is a minor view on EA. I think a lot of people are sympathetic to Torres’ point of view, and a significant part of that is (in my opinion) because there wasn’t a lot of pushback when they started making these claims in major outlets.
On my first comment, I agree that I don’t think much could have been done to stop Émile turning against EA,[1] but I absolutely don’t think it was inevitable that they would have had such a wide impact. They made it into the Bulletin of the Atomic Scientists! They’re partnered with Timnit, who has a lot of influence and sympathy in the AI space! People who could have been potential allies in a coalition basically think our movement is evil.[2] They get sympathetically cited in academic criticisms of EA.
Was some pushback going to happen? Yes, but I don’t think inevitably at this scale. I do think more could have been done to actually push back on their claims that went over the line in terms of hostility and accuracy, and I think that could have led to a better climate at this critical juncture for AI discussions and policy where we need to build coalitions with communities who don’t fully agree with us.
My concern is that this new wave of criticism and attacks on OpenPhil might not simply fade away but could instead cement an anti-EA narrative that could put the movement and the causes we care about at risk. So looking at what happened in the Torres case, and looking at the negative depictions of Open Philanthropy recently, ‘ignore it’ doesn’t look like a good strategy.
I don’t think the hostility between the near-term-harm people and the AI x-riskers would have been prevented by more attacks rebutting Émile Torres.
The real problem is that the near-term AI harm people perceive AI x-riskers as ignoring their concerns and actively making the near-term harms worse.
Unfortunately, I think this sentiment is at least partly accurate. When Timnit got pushed out of Google for pointing out near-term harms of AI, there was almost no support from the x-risk crowd (I can’t find any big-name EAs on this list, for example). This probably contributed to her current anti-EA stance.
As for real-world harms, well, we can just say that OpenAI was started by an x-risker and has kickstarted an AI race, causing a myriad of real-world harms such as scams, art plagiarism, data theft, etc.
The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.
I don’t want to fully re-litigate this history, as I’m more concerned about Open Philanthropy being blindsided by a political attack in the future (it might be low probability, but you’d think OpenPhil would be open to being concerned about low-chance, high-impact threats to itself).
I don’t think the hostility between the near-term-harm people and the AI x-riskers would have been prevented by more attacks rebutting Émile Torres.
Agreed. It predated Émile’s public anti-EA turn for sure. But it was never inevitable. Indeed, supporting Timnit during her firing from Google may have been a super low-cost way to show solidarity. It might have meant that Émile and Timnit wouldn’t have become allies who have strong ideological influence over a large part of the AI research space.
I’d like to think so too, but this is a bridge that needs to be built from both ends imo, as I wouldn’t recommend unilateral action unless I really trusted the other parties involved.
The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.
There seems to have been some momentum towards more collaboration after the AI Safety Summit though. I hope the Bletchley Declaration can be an inflection point for more of this.
What do you see as the risk of building a bridge if it’s not reciprocated?
‘The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.’
What would this look like? I feel like, if all you do is say nice things, that is a good idea usually, but it won’t move the dial that much (and also is potentially lying, depending on context and your own opinions; we can’t just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you’re advocating spending actual EA money and labour on this, surely you’d first need to make a case that stuff “dealing with the short-term harms of AI” is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don’t believe in AI X-risk*, so you think it’s an improvement if AI-related money is spent on short-term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI X-risk is a comparable or better cause area than standard near-term EA stuff or biorisk can’t take that line.
*I am also fairly skeptical it is a good use of EA money and effort for what it’s worth, though I’ve ended up working on it anyway.
This seems a little zero-sum, which is not how successful social movements tend to operate. I’ll freely confess that I am on the “near term risk” team, but that doesn’t mean the two groups can’t work together.
A simplified example: Say 30% of a council are concerned about near-term harms, and 30% are concerned about x-risk, and each wants policies passed that address their own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither of them will get what they want. But if they work together, they have a majority, and can pass a combined bill that addresses both near-term harms and AI x-risk, benefiting both.
Unfortunately, the best time to do this bridge building and alliance making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
You haven’t actually addressed the main question of the previous comment: What would this bridge building look like? Your council example does not match the current reality very well.
It feels like you also sidestep other stuff in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on “short-term harms”? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed-net and the marginal AI x-risk research? Or would the main reason for spending money on “short-term harms” be that we buy sympathy with the group of people concerned about “short-term harms”, so we can later pass regulations together with them to reduce both “short-term harm” and AI x-risk?
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It’s been a while since I read this so I’m not sure it is what you are looking for, but Gideon Futerman had some ideas for what “bridge building” might look like.)
I just read most of the article. It was not that satisfying in this context. Most of it is arguments that we should work together (which I don’t disagree with).
And I imagine it will be quite hard to convince most AI x-risk people that “whether AI is closer to a stupid ‘stochastic parrot’ or on the ‘verge-of-superintelligence’ doesn’t really matter”. If we were to adopt Gideon’s desired framing, it looks like we would need to make sacrifices in epistemics. Related:
The relevant question isn’t “are the important harms to be prioritised the existential harms or the non-existential ones?”, “will AI be agents or not?”, nor “will AI be stochastic parrots or superintelligence?” Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky.
Some of Gideon’s suggestions, such as protest or compute governance, are already being pursued. Not sure if that counts as bridge building though, because these might be good ideas anyways.
For the record, I’m very willing to be corrected/amend my Quick Take (and my beliefs on this in general) if “ignore it” isn’t an accurate summary of what was done. Perhaps there was internal action taken within academic spaces/EA organisations that I’m not aware of? I still think the net effect of EA actions in any case was closer to “ignore it”, but the literal strong claim may be incorrect.