Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy the reasoning that even tiny increases in X-risk are very bad. But actually, “net negative in expectation” is compatible with “probably mostly harmless”: the expected value of X can be very negative even while the probability that X does any actual (as opposed to expected) harm is low. If you don’t really buy the arguments for AI X-risk, but you do buy the argument that very small increases in X-risk are really bad, you might end up in exactly that position. On some days, I think I think that, though my views on all this aren’t very stable.
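To make that concrete with invented numbers: suppose X has a 1-in-10,000 chance of contributing to a catastrophe we’d value at -10^9, and otherwise does nothing. Then the expected value is 0.0001 × (-10^9) = -100,000, which is very negative, yet the claim “X did actual harm” turns out false 99.99% of the time.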