FWIW, I think that the specific things you point to in this comment do seem like some evidence in favour of your claim that some influential EA orgs have some bias against things broadly along the lines of prioritising s-risks or adopting suffering-focused ethical views. And as mentioned in my other comment, I also did already see that claim as plausible.
(I guess more specifically, I see it as likely that at least some people at EA orgs have this bias, and likely that there’s at least a little more of this bias than of an “opposite” bias, but not necessarily likely—just plausible—that there’s substantially more of that bias than of the “opposite” bias.)
Also, on reflection, I think I was wrong to say “I don’t think the post you link to provides good evidence [for your claim].” I think that the post you link to does contain some ok evidence for that claim, but also overstates the strength of this evidence, makes other over-the-top claims, and provides as evidence some things that don’t seem worth noting at all, really.
And to put my own cards on the table on some related points:
I’d personally like the longtermist community to shift marginally towards less conflation of “existential risk” (or the arguments for existential risk reduction) with “extinction risk”, more acknowledgement that effects on nonhumans should perhaps be a key consideration for longtermists, and more acknowledgement of s-risks as a plausible longtermist priority.
But I also think we’re already moving in the right direction on these fronts, and that we’re already in a fairly ok place.