I think another genuine issue for longtermism is complex cluelessness/deep uncertainty and moral uncertainty, although it’s not specific to longtermism. Even if you identify an intervention that you think has predictably large effects on the far future, you may not be able to weigh the arguments and evidence in such a way as to decide that it’s actually net positive in expectation.
It’s easy to forget the possibility that you’ll do more harm than good, or to give it too little weight, and I suspect this is worse when we get into very small probabilities of making a difference, and especially Pascalian cases, since we’re especially bad at estimating such (differences in) probabilities and at weighing many very small (differences in) probabilities.
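To illustrate with made-up numbers (a toy sketch, not anyone’s actual estimates): suppose an intervention secures an astronomically valuable future worth $V$ with probability $p$, but also risks bringing about a comparably bad outcome with probability $q$, so its expected value is roughly

$$\mathbb{E}[\text{value}] \approx (p - q)\,V.$$

If $p$ and $q$ are each on the order of $10^{-9}$ and each is only known to within an order of magnitude or so, then the sign of $p - q$, and hence of the whole expectation, is driven almost entirely by estimation error rather than by anything we can measure, which is exactly the situation in which it’s easy to be doing more harm than good without noticing.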
There seems to be some bias at some influential EA orgs against writing about the idea that the far future could be bad (or worse conditional on the survival of humanity or our descendants), which can lead us to systematically underestimating the risks of backfire in this way or related ways. There are other ways seemingly good interventions can backfire, e.g. the research we publish could be used for harm (even if those using it believe they’re doing good according to their own views!), or some AI safety work could be accelerating the development (and adoption) of AGI. Without good feedback loops, such biases and blind spots can persist more easily, and short-termist interventions tend to have better feedback loops than longtermist ones.
There seems to be some bias at some influential EA orgs against writing about the idea that the far future could be bad (or worse conditional on the survival of humanity or our descendants), which can lead us to systematically underestimating the risks of backfire in this way or related ways.
I think that the claim you make is plausible, but I don’t think the post you link to provides good evidence of it. If readers were going to read and update on that post, I’d encourage them to also read the commentary on it here. (I read the post myself and found it very unconvincing and strange.)
I think the guidelines and previous syllabi/reading lists are/were biased against downside-focused views, practically pessimistic views, and, in general, views other than total symmetric and classical utilitarianism (the views used most to defend work against extinction), as discussed in the corresponding sections of the post. This applies both to the normative ethics side and to the discussion of how the future could be bad or extinction could be good. I discussed CLR’s guidelines with Jonas Vollmer here. CLR’s guidelines are here, and the guidelines endorsed by 80,000 Hours, CEA, CFAR, MIRI, Open Phil and particular influential EAs are here. (I don’t know if these are current.)
On the normative ethics side, CLR is expected to discuss moral uncertainty and non-asymmetric views in particular, to undermine asymmetric views. The other side is expected to discuss moral uncertainty and s-risks, but not asymmetric views in particular. This biases us away from asymmetric views, according to which the future may be bad and extinction may be good.
On discussion of how the future could be bad or extinction could be good, from CLR’s guidelines:
Minimize the risk of readers coming away contemplating causing extinction, i.e., consider discussing practical ways to reduce s-risks instead of saying how the future could be bad
(...)
In general, we recommend writing about practical ways to reduce s-risk without mentioning how the future could be bad overall. We believe this will likely have similar positive results with fewer downsides because there are already many articles on theoretical questions.
(emphasis mine)
So, CLR associates are discouraged from arguing that the future could be bad and extinction could be good, biasing us against these hypotheses.
I’m not sure that the guidelines for CLR are actually bad overall, though, since I think the arguments for them are plausible, and I agree that people with pessimistic or downside-focused views should not seek to cause extinction, except possibly through civil discussion and outreach causing people to deprioritize work on preventing extinction. But the guidelines rule out ways of doing the latter, too.
I have my own (small) personal example related to normative ethics, too. The coverage of the asymmetry on this page, featured on 80,000 Hours’ Key Ideas page, is pretty bad:
One issue with this is that it’s unclear why this asymmetry would exist.
The article does not cite any literature making positive cases for the asymmetry (although they discuss the repugnant conclusion as being a reason for person-affecting views). I cite some in this thread.
The bigger problem though is that this asymmetry conflicts with another common sense idea.
Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life, but if creating a happy life is neither good or bad, then we have to conclude that both options are neither good nor bad. This implies both options are equally good, which seems bizarre.
There are asymmetric views to which this argument does not apply, some published well before this page, e.g. this and this. Also, the conclusion may not be so bizarre if the lives are equally content/satisfied, in line with negative accounts of welfare (tranquilism/Buddhist axiology, antifrustrationism, negative utilitarianism, etc.).
Over a year ago, I criticized this for being unfair in the comments section of that page, linking to comments in my own EA Forum shortform and to other literature with arguments for the asymmetry, and someone strong-downvoted the comments in my shortform (with a downvote strength of 7) without any explanation. There was also already another comment criticizing the discussion of the asymmetry.
FWIW, I think that the specific things you point to in this comment do seem like some evidence in favour of your claim that some influential EA orgs have some bias against things broadly along the lines of prioritising s-risks or adopting suffering-focused ethical views. And as mentioned in my other comment, I also did already see that claim as plausible.
(I guess more specifically, I see it as likely that at least some people at EA orgs have this bias, and likely that there’s at least a little more of this bias than of an “opposite” bias, but not necessarily likely—just plausible—that there’s substantially more of that bias than of the “opposite” bias.)
Also, on reflection, I think I was wrong to say “I don’t think the post you link to provides good evidence [for your claim].” I think that the post you link to does contain some ok evidence for that claim, but also overstates the strength of this evidence, makes other over-the-top claims, and provides as evidence some things that don’t seem worth noting at all, really.
And to put my own cards on the table on some related points:
I’d personally like the longtermist community to have a bit of a marginal shift towards less conflation of “existential risk” (or the arguments for existential risk reduction) with “extinction risk”, more acknowledgement that effects on nonhumans should perhaps be a key consideration for longtermists, and more acknowledgement of s-risks as a plausible longtermist priority
But I also think we’re already moving in the right direction on these fronts, and that we’re already in a fairly ok place
From what I’ve read, moral uncertainty tends to work in favour of longtermists, provided you’re happy to do something like maximising expected choice-worthiness. E.g. see here for moral uncertainty about population axiology implying we should choose options preferred by total utilitarianism (disclaimer: I’ve only read the abstract!). If Greaves and MacAskill’s claim about the robustness of longtermism to different moral views is fair, it seems longtermism should remain fairly robust in the face of moral uncertainty.
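As a minimal sketch of what maximising expected choice-worthiness amounts to (the credences and choice-worthiness numbers are made up purely for illustration, and I’m setting aside problems of intertheoretic comparisons): given theories $T_1, \dots, T_n$ with credences $c_i$, each assigning choice-worthiness $\mathrm{CW}_i(a)$ to an option $a$, you pick the option maximising

$$\mathrm{EC}(a) = \sum_i c_i \,\mathrm{CW}_i(a).$$

For example, with credence $0.6$ in a total view that assigns $100$ to extinction-risk work and credence $0.4$ in a person-affecting view that assigns it $1$, we get $\mathrm{EC} = 0.6 \times 100 + 0.4 \times 1 = 60.4$, so the high-stakes total view tends to dominate, which is roughly the mechanism behind the result cited above.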
In terms of complex cluelessness in a more empirical sense, I admit I haven’t properly considered the possibility that something like “researching AI alignment” may have realistic downsides. I do, however, find it a tougher sell that we’re complexly clueless about working on AI alignment in the same way that we are about giving to AMF.