I agree it's a solid heuristic, but heuristics aren't foolproof and it's important to be able to realise where they're not working.
I remembered your tweet about choosing intellectual opponents wisely because I think it'd be useful to show where we disagree on this:
1 - Choosing opponents is sometimes not up to you. As an analogy, it only takes one party throwing punches to put you in a physical fight. When debates start to have significant social and political consequences, it's worth considering that letting hostile ideas spread unchallenged may work out badly in the future.
2 - I'm not sure it's clear that "the silent majority can often already see their mistakes" in this case. I don't think this is a minor view on EA. I think a lot of people are sympathetic to Torres' point of view, and a significant part of that is (in my opinion) because there wasn't a lot of pushback when they started making these claims in major outlets.
On my first comment, I agree that not much could have been done to stop Émile turning against EA,[1] but I absolutely don't think it was inevitable that they would have had such a wide impact. They made it into the Bulletin of the Atomic Scientists! They're partnered with Timnit, who has large influence and sympathy in the AI space! People who could have been potential allies in a coalition basically think our movement is evil.[2] They get sympathetically cited in academic criticisms of EA.
Was some pushback going to happen? Yes, but I don't think inevitably at this scale. I do think more could have been done to actually push back on their claims that went over the line in terms of hostility and accuracy, and I think that could have led to a better climate at this critical juncture for AI discussions and policy, where we need to build coalitions with communities who don't fully agree with us.
My concern is that this new wave of criticism and attack on OpenPhil might not simply fade away, but could instead cement an anti-EA narrative that puts the movement and the causes we care about at risk. So looking at what happened in the Torres case, and looking at the negative depictions of Open Philanthropy recently, "ignore it" doesn't look like a good strategy.
Though they were part of EA space for a while, so there's probably some "common knowledge" that some people might have that paints a
I think the whole thread this tweet is a part of is worth reading
I don't think the hostility between the near-term harm people and the AI x-riskers would have been prevented by more attacks rebutting Émile Torres.
The real problem is that the near-term AI harm people perceive AI x-riskers as ignoring their concerns and actively making the near-term harms worse.
Unfortunately, I think this sentiment is at least partly accurate. When Timnit got pushed out of Google for pointing out near-term harms of AI, there was almost no support from the x-risk crowd (I can't find any big-name EAs on this list, for example). This probably contributed to her current anti-EA stance.
As for real-world harms, well, we can just say that OpenAI was started by an x-risker and has kickstarted an AI race, causing a myriad of real-world harms such as scams, art plagiarism, data theft, etc.
The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.
I don't want to fully re-litigate this history, as I'm more concerned about the future of Open Philanthropy being blindsided by a political attack (it might be low probability, but you'd think OpenPhil would be open to being concerned about low-chance, high-impact threats to itself).
Agreed. It predated Émile's public anti-EA turn for sure. But it was never inevitable. Indeed, supporting Timnit during her firing from Google may have been a super low-cost way to show solidarity. It might have meant that Émile and Timnit wouldn't have become allies who have strong ideological influence over a large part of the AI research space.
I'd like to think so too, but this is a bridge that needs to be built from both ends imo, as I wouldn't recommend unilateral action unless I really trusted the other parties involved.
There seems to have been some momentum towards more collaboration after the AI Safety Summit though. I hope the Bletchley Declaration can be an inflection point for more of this.
What do you see as the risk of building a bridge if it's not reciprocated?
"The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI."
What would this look like? If all you do is say nice things, that is usually a good idea, but it won't move the dial that much (and it is also potentially lying, depending on context and your own opinions; we can't just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you're advocating spending actual EA money and labour on this, surely you'd first need to make a case that "dealing with the short-term harms of AI" is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don't believe in AI x-risk*, so you think it's an improvement if AI-related money is spent on short-term stuff, whether or not that is better than spending it on animal welfare, global health and development, or for that matter anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI x-risk is comparable to or better than standard near-term EA causes or biorisk can't take that line.
*I am also fairly skeptical it is a good use of EA money and effort for what it's worth, though I've ended up working on it anyway.
This seems a little zero-sum, which is not how successful social movements tend to operate. I'll freely confess that I am on the "near-term risk" team, but that doesn't mean the two groups can't work together.
A simplified example: say 30% of a council are concerned about near-term harms and 30% are concerned about x-risk, and each group wants policies passed that address its own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither of them will get what they want. But if they work together, they have a majority, and can pass a combined bill that addresses both near-term harm and AI x-risk, benefiting both.
Unfortunately, the best time to do this bridge building and alliance making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
You haven't actually addressed the main question of the previous comment: What would this bridge building look like? Your council example does not match the current reality very well.
It feels like you also sidestep other stuff in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on "short-term harms"? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed-net and the marginal AI x-risk research? Or would the main reason for spending money on "short-term harms" be that we buy sympathy with the group of people concerned about "short-term harms", so we can later pass regulations together with them to reduce both "short-term harm" and AI x-risk?
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It's been a while since I read this so I'm not sure it is what you are looking for, but Gideon Futerman had some ideas for what "bridge building" might look like.)
I just read most of the article. It was not that satisfying in this context. Most of it is arguments that we should work together (which I don't disagree with). And I imagine it will be quite hard to convince most AI x-risk people that "whether AI is closer to a stupid 'stochastic parrot' or on the 'verge-of-superintelligence' doesn't really matter". If we were to adopt Gideon's desired framing, it looks like we would need to make sacrifices in epistemics. Related:

"The relevant question isn't 'are the important harms to be prioritised the existential harms or the non-existential ones?', 'will AI be agents or not?', nor 'will AI be stochastic parrots or superintelligence?' Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky."
Some of Gideon's suggestions, such as protest or compute governance, are already being pursued. Not sure if that counts as bridge building though, because these might be good ideas anyway.