It probably won’t. In my opinion, the idea of a “single, overarching goal to be maximised at all costs” is outdated, based on speculation from before neural networks and the like became the norm.
Nostalgebraist asked a similar question to yours a while back and made a good argument that it won’t. I put my own thoughts on why fixed-goal maximisation is unlikely into my own post here.
As others have pointed out, an AI doesn’t need a fixed goal to be dangerous. But I think lacking such a goal does make a few nightmare scenarios significantly less likely, and overall decreases the likelihood of apocalypse.