"The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI."
What would this look like? I feel like, if all you do is say nice things, that is usually a good idea, but it won't move the dial much (and it is also potentially lying, depending on context and your own opinions; we can't just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you're advocating spending actual EA money and labour on this, surely you'd first need to make a case that "dealing with the short-term harms of AI" is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don't believe in AI X-risk*, so you think it's an improvement if AI-related money is spent on short-term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI X-risk is comparable to or better than standard near-term EA stuff or biorisk as a cause area can't take that line.
*I am also fairly skeptical it is a good use of EA money and effort, for what it's worth, though I've ended up working on it anyway.
This seems a little zero-sum, which is not how successful social movements tend to operate. I'll freely confess that I am on the "near-term risk" team, but that doesn't mean the two groups can't work together.
A simplified example: Say 30% of a council are concerned about near-term harms, and 30% are concerned about x-risk, and each wants policies passed that address their own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither of them will get what they want. But if they work together, they have a majority and can pass a combined bill that addresses both near-term harm and AI x-risk, benefiting both.
Unfortunately, the best time to do this bridge building and alliance making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
You haven't actually addressed the main question of the previous comment: What would this bridge building look like? Your council example does not match the current reality very well.
It feels like you also sidestep other stuff in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on "short-term harms"? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed net and the marginal AI x-risk research? Or would the main reason for spending money on "short-term harms" be that we buy sympathy with the group of people concerned about "short-term harms", so we can later pass regulations together with them to reduce both "short-term harm" and AI x-risk?
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It's been a while since I read this so I'm not sure it is what you are looking for, but Gideon Futerman had some ideas for what "bridge building" might look like.)

I just read most of the article. It was not that satisfying in this context. Most of it is arguments that we should work together (which I don't disagree with).
And I imagine it will be quite hard to convince most AI x-risk people that "whether AI is closer to a stupid 'stochastic parrot' or on the 'verge-of-superintelligence' doesn't really matter". If we were to adopt Gideon's desired framing, it looks like we would need to make sacrifices in epistemics. Related:
"The relevant question isn't 'are the important harms to be prioritised the existential harms or the non-existential ones?', 'will AI be agents or not?', nor 'will AI be stochastic parrots or superintelligence?' Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky."
Some of Gideon's suggestions, such as protest or compute governance, are already being pursued. Not sure if that counts as bridge building though, because these might be good ideas anyway.