Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy the reasoning that even tiny increases in X-risk are very bad. But actually, "net negative in expectation" is compatible with "probably mostly harmless". I.e. the expected value of X can be very negative even while the chance of the claim "X did (actual, not expected) harm" turning out to be true is low. If you don't really buy the arguments for AI X-risk but you do buy the argument that "very small increases in X-risk are really bad", you might think that. On some days, I think I think that, though my views on all this aren't very stable.
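To make the "very negative in expectation, yet probably harmless" point concrete, here is a toy calculation. The numbers are purely illustrative assumptions, not anything from the discussion above:

```python
# Toy illustration: a tiny probability of a huge loss can make an action
# very negative in expectation, while the action is still almost certainly
# harmless in actuality. All numbers here are made up for illustration.

p_harm = 1e-6          # assumed one-in-a-million chance the harm occurs
harm_value = -1e12     # assumed value of the bad outcome (arbitrary units)

expected_value = p_harm * harm_value   # very negative in expectation
p_no_actual_harm = 1 - p_harm          # yet "probably mostly harmless"

print(expected_value)    # -1000000.0
print(p_no_actual_harm)  # 0.999999
```

So the claim "X was net negative in expectation" and the claim "X almost certainly did no actual harm" can both be true of the same action.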