I often see people talking past each other when discussing x-risks because the definition[1] covers outcomes that are distinct in some worldviews. For some, humanity failing to reach its full potential and humanity going extinct are joint concerns, but for others they are separate outcomes. Is there a good solution to this?
“An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.” (source)
I propose the terms “positive and negative longtermism”: efforts aimed at helping humanity reach its full potential would fall under positive longtermism, while protection against extinction alone would be negative longtermism.