“The default framing for reducing existential risk is something like this: ‘Currently, humans have control over what we want, but there’s a risk that we would lose this control.’”
Can you perhaps point to some examples?
To me it seems that the default framing is often focused on extinction risks, with non-extinction existential risks mentioned as a sort of secondary case. Under this framing, the issue of control isn’t really raised; the focus is mostly on the distinction between survival and extinction.
Maybe you had specific writings (focusing on AI risk?) in mind though?
Good points. I should have written that the point about control is implicit. The default framing focuses on risks, as you say, not on making something happen that gives us more control than we currently have. I think there’s a natural reading of the existential risk framings that implicitly says something like “current levels of control might be adequate if it weren’t for destructive risks” or perhaps “there’s a trend where control increases by default and things might go well unless some risk comes about.” To be clear, that’s by no means a necessary implication of any text on existential risks. It’s just something that is under-discussed, and the lack of discussion suggests that some people might think that way.
The second part of my comment here is relevant to this thread’s theme; it explains my position a bit better.