I just wanted to mention the possibility of so-called suffering risks, or s-risks, which IMO should loom large in any meaningful assessment of the expected value of the future. (Although, even if the future is negative on some assessment, it may still be better to avert x-risks: preserving intelligence and promoting compassion for intense suffering, in the expectation that that intelligence will guard against suffering that would otherwise re-emerge in its absence, the way it “emerged” in the past.)
Yes, s-risks are definitely an important concept there! I mention them only at point 7, but not because I thought they weren’t important :)