This seems a little zero-sum, which is not how successful social movements tend to operate. I’ll freely confess that I am on the “near term risk” team, but that doesn’t mean the two groups can’t work together.
A simplified example: Say 30% of a council are concerned about near-term harms, and 30% are concerned about x-risk, and each group wants policies passed that address its own concerns. If the two spend all their time shitting on each other for having misplaced priorities, neither will get what they want. But if they work together, they have a majority and can pass a combined bill that addresses both near-term harms and AI x-risk, benefiting both.
Unfortunately, the best time to do this bridge building and alliance making was several years ago, and the distrust is already rather entrenched. But I genuinely think that working to mend those bridges will make both groups better off in the long run.
You haven’t actually addressed the main question of the previous comment: What would this bridge building look like? Your council example does not match the current reality very well.
It feels like you also sidestep other points in the comment, and it is unclear what your position is. Should we spend EA money (or other resources) on “short-term harms”? If yes, is the main reason that funding the marginal AI ethics research is better than the marginal bed-net or the marginal AI x-risk research? Or would the main reason for spending money on “short-term harms” be that we buy sympathy with the group of people concerned about “short-term harms”, so we can later pass regulations together with them to reduce both “short-term harm” and AI x-risk?
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It’s been a while since I read this, so I’m not sure it is what you are looking for, but Gideon Futerman had some ideas for what “bridge building” might look like.)

I just read most of the article. It was not that satisfying in this context. Most of it is arguments that we should work together (which I don’t disagree with). And I imagine it will be quite hard to convince most AI x-risk people that “whether AI is closer to a stupid ‘stochastic parrot’ or on the ‘verge-of-superintelligence’ doesn’t really matter”. If we were to adopt Gideon’s desired framing, it looks like we would need to make sacrifices in epistemics. Related:
The relevant question isn’t “are the important harms to be prioritised the existential harms or the non-existential ones?”, “will AI be agents or not?”, nor “will AI be stochastic parrots or superintelligence?” Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky.
Some of Gideon’s suggestions, such as protest or compute governance, are already being pursued. I’m not sure that counts as bridge building, though, because these might be good ideas anyway.