There seems to be some bias at some influential EA orgs against writing about the idea that the far future could be bad (or worse conditional on the survival of humanity or our descendants), which can lead us to systematically underestimate the risks of backfire in this way or related ways.
I think that the claim you make is plausible, but I don't think the post you link to provides good evidence of it. If readers were going to read and update on that post, I'd encourage them to also read the commentary on it here. (I read the post myself and found it very unconvincing and strange.)
I think the guidelines and previous syllabi/reading lists are/were biased against downside-focused views, practically pessimistic views, and views other than total symmetric and classical utilitarianism (which are used most to defend work against extinction) in general, as discussed in the corresponding sections of the post. This applies both to the normative ethics side and to discussion of how the future could be bad or extinction could be good. I discussed CLR's guidelines with Jonas Vollmer here. CLR's guidelines are here, and the guidelines endorsed by 80,000 Hours, CEA, CFAR, MIRI, Open Phil and particular influential EAs are here. (I don't know if these are current.)
On the normative ethics side, CLR is expected to discuss moral uncertainty and non-asymmetric views in particular, which works to undermine asymmetric views. Meanwhile, the other side is expected to discuss moral uncertainty and s-risks, but not asymmetric views in particular. This biases us away from asymmetric views, according to which the future may be bad and extinction may be good.
On discussion of how the future could be bad or extinction could be good, from CLR's guidelines:
Minimize the risk of readers coming away contemplating causing extinction, i.e., consider discussing practical ways to reduce s-risks instead of saying how the future could be bad
(...)
In general, we recommend writing about practical ways to reduce s-risk without mentioning how the future could be bad overall. We believe this will likely have similar positive results with fewer downsides because there are already many articles on theoretical questions.
(emphasis mine)
So, CLR associates are discouraged from arguing that the future could be bad and extinction could be good, biasing us against these hypotheses.
I'm not sure the guidelines for CLR are actually bad overall, though, since I think the arguments for them are plausible, and I agree that people with pessimistic or downside-focused views should not seek to cause extinction, except possibly through civil discussion and outreach that leads people to deprioritize work on preventing extinction. But the guidelines rule out ways of doing the latter, too.
I have my own (small) personal example related to normative ethics, too. The coverage of the asymmetry on this page, featured on 80,000 Hours' Key Ideas page, is pretty bad:
One issue with this is that it's unclear why this asymmetry would exist.
The article does not cite any literature making positive cases for the asymmetry (although they discuss the repugnant conclusion as being a reason for person-affecting views). I cite some in this thread.
The bigger problem though is that this asymmetry conflicts with another common sense idea.
Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life, but if creating a happy life is neither good nor bad, then we have to conclude that both options are neither good nor bad. This implies both options are equally good, which seems bizarre.
There are asymmetric views to which this argument does not apply, some published well before this page, e.g. this and this. Also, the conclusion may not be so bizarre if the lives are equally content/satisfied, in line with negative accounts of welfare (tranquilism/Buddhist axiology, antifrustrationism, negative utilitarianism, etc.).
Over a year ago, I criticized this for being unfair in the comments section of that page, linking to comments in my own EA Forum shortform and other literature with arguments for the asymmetry, and someone strongly downvoted the comments in my shortform with a downvote strength of 7 and without any explanation. There was also already another comment criticizing the discussion of the asymmetry.
FWIW, I think that the specific things you point to in this comment do seem like some evidence in favour of your claim that some influential EA orgs have some bias against things broadly along the lines of prioritising s-risks or adopting suffering-focused ethical views. And as mentioned in my other comment, I also did already see that claim as plausible.
(I guess more specifically, I see it as likely that at least some people at EA orgs have this bias, and likely that there's at least a little more of this bias than of an "opposite" bias, but not necessarily likely, just plausible, that there's substantially more of that bias than of the "opposite" bias.)
Also, on reflection, I think I was wrong to say "I don't think the post you link to provides good evidence [for your claim]." I think that the post you link to does contain some ok evidence for that claim, but also overstates the strength of this evidence, makes other over-the-top claims, and provides as evidence some things that don't seem worth noting at all, really.
And to put my own cards on the table on some related points:
I'd personally like the longtermist community to have a bit of a marginal shift towards less conflation of "existential risk" (or the arguments for existential risk reduction) with "extinction risk", more acknowledgement that effects on nonhumans should perhaps be a key consideration for longtermists, and more acknowledgement of s-risks as a plausible longtermist priority
But I also think we're already moving in the right direction on these fronts, and that we're already in a fairly ok place