In the scenario where AGI would be 100% malevolent, slowing progress seems very good, and all AIS people should pivot to slowing or stopping AI progress. Unless we're getting into "is x-risk bad given the current state of the world" arguments, which become a lot stronger if there's no safe AI utopia at the end of the tunnel. Either way, it seems like it's not irrelevant.