FWIW, I think that the specific things you point to in this comment do seem like some evidence in favour of your claim that some influential EA orgs have some bias against things broadly along the lines of prioritising s-risks or adopting suffering-focused ethical views. And as mentioned in my other comment, I also did already see that claim as plausible.
(I guess more specifically, I see it as likely that at least some people at EA orgs have this bias, and likely that there's at least a little more of this bias than of an "opposite" bias, but not necessarily likely, just plausible, that there's substantially more of that bias than of the "opposite" bias.)
Also, on reflection, I think I was wrong to say "I don't think the post you link to provides good evidence [for your claim]." I think the post you link to does contain some OK evidence for that claim, but it also overstates the strength of that evidence, makes other over-the-top claims, and offers as evidence some things that don't really seem worth noting at all.
And to put my own cards on the table on some related points:
- I'd personally like the longtermist community to make a bit of a marginal shift towards less conflation of "existential risk" (or the arguments for existential risk reduction) with "extinction risk", more acknowledgement that effects on nonhumans should perhaps be a key consideration for longtermists, and more acknowledgement of s-risks as a plausible longtermist priority.
- But I also think we're already moving in the right direction on these fronts, and that we're already in a fairly OK place.