I just read most of the article. It was not that satisfying in this context. Most of it argues that we should work together (which I don't disagree with).
And I imagine it will be quite hard to convince most AI x-risk people that "whether AI is closer to a stupid 'stochastic parrot' or on the 'verge-of-superintelligence' doesn't really matter; …". If we were to adopt Gideon's desired framing, it looks like we would need to make sacrifices in epistemics. Related:
The relevant question isn't "are the important harms to be prioritised the existential harms or the non-existential ones?", "will AI be agents or not?", nor "will AI be stochastic parrots or superintelligence?" Rather, the relevant question is whether we think that power-accumulation and concentration in and through AI systems, at different scales of capability, is extremely risky.
Some of Gideon's suggestions, such as protest or compute governance, are already being pursued. I'm not sure that counts as bridge building, though, because these might be good ideas anyway.
https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different (It's been a while since I read this, so I'm not sure it is what you are looking for, but Gideon Futerman had some ideas for what "bridge building" might look like.)