Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion that the expected value of extinction risk reduction is (highly) positive.
I do think your “very unlikely that [human descendants] would see value exactly where we see disvalue” argument is a viable one, but I think it’s just one of many considerations, and my current impression of the evidence is that it’s outweighed.
Also FYI the link in your article to “moral circle expansion” is dead. We work on that approach at Sentience Institute if you’re interested.
Hey Jacy,
I have seen and read your post. It was published after my internal “Oh my god, I really, really need to stop reading and integrating even more sources, the article is already way too long”-deadline, so I don’t refer to it in the article.
In general, I am more confident about the expected value of extinction risk reduction being positive than about extinction risk reduction actually being the best thing to work on. It might well be that e.g. moral circle expansion is more promising, even if we have good reasons to believe that extinction risk reduction is positive.
I do think your “very unlikely that [human descendants] would see value exactly where we see disvalue” argument is a viable one, but I think it’s just one of many considerations, and my current impression of the evidence is that it’s outweighed.
I personally don’t think that this argument is very strong on its own. But I think there are additional strong arguments (in descending order of relevance):
“The universe might already be filled with suffering and post-humans might do something against it.”
“Global catastrophes that don’t lead to extinction might have negative long-term effects”
“Other non-human animal civilizations might be worse”
...
Thank you for the reply, Jan, especially noting those additional arguments. I worry that your article neglects them in favor of less important/controversial questions on this topic. I see many EAs taking the “very unlikely that [human descendants] would see value exactly where we see disvalue” argument (I’d call this the ‘will argument’: the future might be dominated by human-descendant will, and there is much more will to create happiness than suffering, especially in terms of the likelihood of hedonium over dolorium) and using it to justify a very heavy focus on reducing extinction risk without exploring those many other arguments. I worry that much of the Oxford/SF-based EA community has committed hard to reducing extinction risk without that exploration.
It’d be great if at some point you could write up a discussion of those other arguments, since I think that’s where the thrust of the disagreement lies between people who think the far future is highly positive, close to zero, or highly negative. Though unfortunately, it always ends up coming down to highly intuitive judgment calls on these macro-socio-technological questions. As I mentioned in that post, my guess is that long-term empirical study, like the research in The Age of Em or the work done at Sentience Institute, is our best way of improving those highly intuitive judgment calls and finally reaching agreement on the topic.
Hey Jacy,
I have written up my thoughts on all these points in the article. Here are the links.
“The universe might already be filled with suffering and post-humans might do something against it.”
Part 2.2
“Global catastrophes that don’t lead to extinction might have negative long-term effects”
Part 3
“Other non-human animal civilizations might be worse”
Part 2.1
The final paragraphs of each section usually contain a discussion of how relevant I think each argument is. All these sections also include some quantitative EV estimates (linked or in the footnotes).
But you probably saw that, since it is also explained in the abstract. So I am not sure what you mean when you say:
It’d be great if at some point you could write up a discussion of those other arguments
Are we talking about the same arguments?
Oh, sorry, I was thinking of the arguments in my post, not (only) those in your post. I should have been more precise in my wording.