How to engage with AI for Social Justice (AI4SJ) actors

I have noticed increasing hostility towards EA and longtermism from many researchers and commentators in what I might describe (there may be better labels out there) as the AI for Social Justice (AI4SJ) community.

AI4SJ researchers, in my opinion, view the risks of AI through the lens of structural power. Imbalances in who owns emerging tech, who builds it, and whose data it is trained on manifest in issues such as algorithmic bias and reduced privacy and human agency for minority groups. People in the AI4SJ group include Timnit Gebru, Kate Crawford, Karen Hao and Margaret Mitchell.

At times, it seems like AI4SJ researchers should be natural allies of EA. They do great work on alternative LLMs, and worry deeply about the social harms that poorly governed AI could cause. Personally, I think that Gebru and Mitchell should be given as much support as possible, given that their unfair dismissal from Google’s AI Ethics team signals the dangers of an unaccountable concentration of technological power in a small number of firms.

Yet Gebru recently tweeted about “longtermism and effective altruism bullshit”, and said Silicon Valley EA types are “convincing themselves that the way in which they’re exploiting people and causing harm is the best possible thing they can be doing in the world”.

Part of this probably stems from the core thesis of the infamous Phil Torres critique of longtermism: that it “ignores structural injustice today and doesn’t value the developing world”. Algorithmic bias today is not the same as x-risk from unaligned AI in 30 years, but surely there is enough common ground for both communities to work on?

How do people view the debate between these groups, and what is the best way to engage and work together to make progress against risks from AI and emerging tech?