One thing I find meta-interesting about s-risk is that it’s included in the sort of thing we were pointing at in the late 90s, before we started talking about x-risk. So to my mind s-risk has always been part of the x-risk mitigation program, but, as you make clear, that’s not how it’s been communicated.
I wonder if there are types of risks to the long-term future that we implicitly would like to avoid but have accidentally excluded from both the explicit x-risk and s-risk definitions.
I’d be interested to hear more about that if you want to take the time.
My memory is somewhat fuzzy here because it was almost 20 years ago, but I seem to recall discussions on Extropians about far-future “bad” outcomes. In those early days much of the discussion centered on salient outcomes like “robots wipe out humans” that we picked up from fiction, or on outcomes that let people grind their particular axes (capitalist dystopian future! ecoterrorist dystopian future! ___ dystopian future!), but there was definitely more serious focus on some particular issues.
I remember we worried a lot about grey goo, AIs, extraterrestrial aliens, pandemics, nuclear weapons, etc. A lot of it was focused on getting wiped out (existential threats), but some of it was about undesirable outcomes we wouldn’t want to live in. Some of this was about s-risks, I’m sure, but I feel like a lot of it was really more about worries over value drift.
I’m not sure there’s much else there, though. We knew bad outcomes were possible, but we were mostly optimistic and hadn’t developed anything like the risk-avoidance mindset that’s become relatively more prevalent today.