I’m sorry you encountered this, and I don’t want to minimise your personal experience.
I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.
On the whole, though, I’ve found the EA community (both online and those I’ve met in person) to be incredibly pro-LGBT and pro-trans. Both the underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism) and the underlying demographics (e.g. young, highly educated, socially liberal) point that way.
I think where there might be a split is in progressive (as in, politically leftist) framings of these issues and the type of language used to talk about them. Those framings often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don’t think that means the community as a whole, or even that sub-section, is ‘anti-LGBT’ or ‘anti-trans’, and I think there are historical and multifaceted reasons for the enmity between the ‘progressive’ and ‘EA’ camps/perspectives.
Nevertheless, I’m sorry that you’ve experienced this sentiment, and I hope you’re feeling ok.
I’m glad you found my comment useful. With respect, though, I think you should then consider retracting some of your previous comments, or at least reframing them to be more circumspect and to make clear you’re taking issue with a particular framing/subset of the AIXR community, as opposed to EA as a whole.
As for the points in your comment, there’s a lot of good stuff here. I think a post about the NRRC, or even an insider’s view of how the US administration thinks about and handles Nuclear Risk, would be really useful content on the Forum, and also incredibly interesting! Similarly, I think a post on how a community handles making ‘right-tail recommendations’ when those recommendations may erode its collective and institutional legitimacy[1] would be really valuable. (Not saying that you should write these posts; they’re just examples off the top of my head. In general, I think you have a professional perspective a lot of EAs could benefit from.)
I think one thing we agree on is that there’s a need to ask and answer a lot more questions, some of which you mention here (beyond ‘is AIXR valid’):
- What policy options do we have to counteract AIXR if it’s true?
- How does the effectiveness of these policy options change as our estimate of the risk changes?
- What is the median view on risk within the AIXR, broader EA, and broader AI communities?
And so on.
Some people in EA might write this off as ‘optics’, but I think that’s wrong.