Given how many of the frontier AI labs have an EA-related origin story, I think it’s totally plausible that the EA AI xrisk project has been net negative.
Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy reasoning about even tiny increases in X-risk being very bad. But note that “net negative in expectation” is compatible with “probably mostly harmless”: the expected value of X can be very negative even while the probability that X did actual (as opposed to expected) harm is low. So if you don’t really buy the arguments for AI X-risk, but you do buy the argument that very small increases in X-risk are really bad, you might think exactly that about the EA AI X-risk project. On some days, I think I think that, though my views on all this aren’t very stable.
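To make the distinction concrete, here’s a toy calculation with entirely made-up numbers (the probability and the harm figure are illustrative assumptions, not estimates anyone has defended): a tiny chance of an astronomically bad outcome yields a very negative expected value, even though the most likely outcome is that no harm was actually done.

```python
# Toy illustration: "net negative in expectation" vs "probably mostly harmless".
# Both numbers below are hypothetical, chosen only to show the arithmetic.
p_harm = 1e-4             # assumed probability the project caused actual harm
harm_if_realized = -1e10  # assumed (astronomically bad) disvalue if it did

expected_value = p_harm * harm_if_realized  # = -1,000,000

print(f"P(actual harm) = {p_harm:.4%}")        # 0.0100%  -> "probably mostly harmless"
print(f"E[value]       = {expected_value:,}")  # -1,000,000.0 -> very negative in expectation
```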