In the scenario where AGI won’t be malevolent, slowing progress is bad.
In the scenario where AGI will be malevolent but we can fix that with more research, slowing progress is very good.
In the scenario where AGI will be malevolent and research can’t do anything about that, it’s irrelevant.
What’s the hedge position?
In the scenario where AGI would be 100% malevolent, it seems like slowing progress is very good, and all AI safety people should pivot to slowing or stopping AI progress. Unless we're getting into "is x-risk bad given the current state of the world?" arguments, which become a lot stronger if there's no safe-AI utopia at the end of the tunnel. Either way, it seems like it's not irrelevant.