But in a hard takeoff scenario, a failure to see massive success from narrow AIs could stem from regulations and other barriers, or it could simply reflect the limitations of the narrow AIs themselves. In fact, those limitations could point even more forcefully to the massive benefits of an AI that can generalize.
I think you’re saying that regulations/norms could mask dangerous capabilities and development, which would erode the credibility of, and recourse available to, safety efforts. Meanwhile, unhindered by enforcement, bad actors would continue progressing toward worse states, even using the regulations as signposts.
I’m not fully sure I understand all of the sentences in the rest of your paragraph. There are several logical jumps in there.
Gwern’s story “Clippy” lays out some possible ways safety mechanisms could be dislodged. If there is other material you find convincing (about mechanisms and enforcement), it would be good to share.
You’re right, that paragraph was confusing. I just edited it to try to make it clearer.