Under moral uncertainty, many moral perspectives care much more about averting downsides than producing upsides.
Additionally, tractability is probably higher for extinction-level threats, since they are "absorbing" states; decreasing the chance we end up in one gives humanity and its descendants the ability to do whatever they figure out is best.
Finally, there is a meaningful sense in which working on improving the future is plagued by questions about moral progress and lock-in of values, and my intuition is that most interventions that take moral progress seriously and try to avoid lock-in boil down to working on things that are fairly equivalent to avoiding extinction. Interventions that don’t take moral progress seriously instead may look like locking in current values.
The prediction that many moral perspectives care more about averting downsides than producing upsides is well explained if we live in a moral relativist multiverse: there are infinitely many correct moral systems, and which one you arrive at is path-dependent and starting-point-dependent, yet many of these perspectives share an instrumental goal with a step that avoids extinction/disempowerment, because either outcome means that morality loses the competition for survival/dominance.
cf @quinn’s positive vs negative longtermism framework:
https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=pvXtqvGfjATkJq7N2