In the scenario where AGI won't be malevolent, slowing progress is bad.
In the scenario where AGI will be malevolent but we can fix that with more research, slowing progress is very good.
In the scenario where AGI will be malevolent and research can't do anything about that, it's irrelevant.
What's the hedge position?
In the scenario where AGI would 100% be malevolent, it seems like slowing progress is very good, and all AIS people should pivot to slowing or stopping AI progress. Unless we're getting into "is x-risk bad given the current state of the world" arguments, which become a lot stronger if there's no safe AI utopia at the end of the tunnel. Either way, it seems like it's not irrelevant.
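To make the hedge question concrete, here's a minimal expected-value sketch of the three scenarios. All probabilities and payoffs are hypothetical placeholders made up for illustration, not claims from the thread; the reply above argues the third scenario's payoff isn't really zero, which you can model by changing that entry.

```python
# Minimal expected-value sketch of the three scenarios above.
# Every number here is a hypothetical placeholder -- the point is
# only the structure of the bet, not the specific values.

scenarios = {
    # scenario: (probability, value of slowing AI progress in that world)
    "AGI not malevolent":                (0.5, -1.0),  # slowing is bad
    "malevolent, fixable with research": (0.3, +1.0),  # slowing is very good
    "malevolent, not fixable":           (0.2,  0.0),  # slowing is irrelevant?
}

# Probabilities over the scenarios should sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_value_of_slowing = sum(p * v for p, v in scenarios.values())
print(f"E[value of slowing] = {expected_value_of_slowing:+.2f}")
# With these placeholder numbers: E = 0.5*(-1) + 0.3*(+1) + 0.2*0 = -0.20,
# so "don't slow" wins. Shift enough probability mass to the fixable-
# malevolence world, or raise its payoff, and the sign flips. The hedge
# position is just whichever action has the best expectation under the
# probabilities you actually hold.
```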