For what it’s worth, my model of a path to safe AI looks like a narrow winding path along a ridge with deadly falls to either side:
Unfortunately, the deadly falls to either side have illusions projected onto them of shortcuts to power, wealth, and utility. I don’t think there is any path to safety that doesn’t pass through a long stretch with immediate danger nearby. In this model, deliberately and consistently optimizing for safety above all else during the dangerous stretch is the only way to make it through.
The danger zone begins where the model is powerful and agentic enough that a greedy, shortsighted person could say to it, “Here is access to the internet. Make me lots of money,” and a large stream of money would pour into their account. I think we’re only a few years away from that point, and the actions that safety researchers take in the meantime aren’t going to change that. So we need both safety research and governance, and carefully selecting disproportionately safety-accelerating research would be entirely irrelevant to the strategic landscape.
This is just my view, and I may be wrong, but I think it’s worth pointing out that there’s a chance the idea of trying to do disproportionately safety-accelerating research is a distraction from strategically relevant action.