Only when humanity is both able and motivated to significantly change the course of the future do we have option value. Now suppose that our descendants have both the ability and the motivation to affect the future for the good of everyone: a future version of humanity that is wise enough to recognize when the expected value of the future is negative, and coordinated and powerful enough to bring about its own extinction or make other significant changes. As other authors have pointed out (Brauner & Grosse-Holz, 2018), given such a state of affairs it seems unlikely that the future would be bad! After all, humanity would be wise, powerful, and coordinated. Most of the bad futures we worry about do not follow from such a version of humanity, but from one that is powerful yet unwise and/or uncoordinated.
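To make this slightly more precise, here is a minimal formalization; the notation (V for the value of the future if humanity continues, extinction normalized to 0, W for the event that future humanity is wise, coordinated, and powerful) is my own, not from the sources above:

$$\text{OV} \approx P(W) \cdot \mathbb{E}[\max(-V,\, 0) \mid W]$$

With the option open, a wise humanity obtains max(V, 0) instead of V, and max(V, 0) − V = max(−V, 0); so the option only pays off when W obtains and V turns out negative. If, conditional on W, V is very unlikely to be negative, then the option value is correspondingly small.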
To be clear, there would still be a small amount of option value. There could be fringe cases in which a wise and powerful future version of humanity would have good reason to expect the future to be better if it went extinct, and would be able to bring that about. Or perhaps it would be possible for a small group of dedicated, altruistic agents to bring humanity to extinction without risking even worse outcomes; for extinction to be their highest priority, though, they would also need to be unable to improve humanity’s trajectory significantly in any other way. Furthermore, leaving this option open also works the other way around: a small group of ambitious individuals could make humanity go extinct even if the future looks overwhelmingly positive.
Good to see this point made on the forum! I discuss this as well in my 2019 MA Philosophy thesis (based on similar sources): http://www.sieberozendal.com/wp-content/uploads/2020/01/Rozendal-S.T.-2019-Uncertainty-About-the-Expected-Moral-Value-of-the-Long-Term-Future.-MA-Thesis.pdf
Never got around to putting that excerpt on the forum