“enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into the net-negative territory”
Do you mean drag just longtermist EA spending into net-negative territory, or EA spending as a whole? And do you expect actual bad effects from longtermist EA, or just wasted money that could have been spent on short-term causes? I think AI safety money is likely wasted (even though I’ve ended up doing quite a lot of work paid for by it!), but probably mostly harmless. I expect the big impact of longtermist money, for good or ill, to come from biorisk spending, where it’s clear that at least catastrophic risks are real, even if not existential ones. So everything you say about rationalism could be true and longtermist spending could still be quite net positive in expectation, if biorisk work goes well.
Given how many of the frontier AI labs have an EA-related origin story, I think it’s totally plausible that the EA AI xrisk project has been net negative.
Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy the reasoning that even tiny increases in X-risk are very bad. But note that “net negative in expectation” is compatible with “probably mostly harmless”: the expected value of X can be very negative even while the chance of the claim “X did (actual, not expected) harm” turning out to be true is low. If you don’t really buy the arguments for AI X-risk but you do buy the argument that very small increases in X-risk are really bad, you might think that. On some days, I think I think that, though my views on all this aren’t very stable.
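A toy calculation might make this concrete (the numbers here are entirely made up for illustration, not estimates of anything):

```python
# Hypothetical numbers only: an activity with a small chance of a very bad
# outcome can have a sharply negative expected value even though it is
# "probably mostly harmless" in the sense that actual harm is unlikely.
p_harm = 0.01          # 1% chance the project does actual harm
harm = -10_000         # magnitude of that harm (arbitrary units)
benefit = 1            # modest benefit in the other 99% of cases

expected_value = p_harm * harm + (1 - p_harm) * benefit
print(expected_value)  # -99.01: very negative in expectation
print(p_harm)          # yet the probability of actual harm is only 1%
```

So “net negative in expectation” and “probably harmless” can both be true of the same thing at once.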