Thanks for these questions, Oscar! To be clear, I was suggesting that effective messaging would emphasise the injustice of continued AI development in an emotionally compelling way: e.g. the lack of democratic input into corporate attempts to build AGI. I wasn't talking so much about communicating near-term injustices. Though I take your point that allying with other groups suffering from near-term harms would imply a combined near-term and long-term message.
On your first question: would thinking about near-term and long-term harms together lead to worse thinking? Do you mean it would make us care about AI x-risk less?
And on your second point, on whether it would be perceived as manipulative: I don't think so. If AI protest can effectively communicate a 'We are fighting a shared battle' message, as @Gideon Futerman has written about, this could make AI protests seem less niche/esoteric. Identifying concrete examples of harms to specific people/groups is an important part of 'injustice frames', and could make AI risk more salient. In addition, broad 'coalitions of the willing' (e.g. Baptists and Bootleggers) are very common in politics. What do you think?
I suppose I meant something similar to what Chris has also written. I think being single-minded can be valuable. Hopefully it is possible to engage productively with non-x-risk-focused communities without being either deceptive or manipulative. I think it is doable; it just requires some care, I imagine.