You mention S-risk. I tend to try not to think too much about this, but it needs to be considered in any EV estimate of working on AI Safety. I think appropriately factoring it in could be overwhelming, to the point of concluding that preventing ASI from being built is the number-one priority. The x-risk from space colonisation could, in principle, be averted by letting an ASI drive everything extinct. But how likely is ASI-induced extinction compared with ASI-induced S-risk (an ASI simulating, or physically creating, astronomical amounts of unimaginable suffering, on a scale far larger than human space colonisation could ever achieve)?