“enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into the net-negative territory”
Do you mean drag just longtermist EA spending into net-negative territory, or EA spending as a whole? And do you expect actual bad effects from longtermist EA, or just wasted money that could have been spent on short-term causes? I think AI safety money is likely wasted (even though I’ve ended up doing quite a lot of work paid for by it!), but probably mostly harmless. I expect the big impact of longtermist money, for good or ill, to come from biorisk spending, where it’s clear that at least catastrophic risks are real, even if existential ones are not. So everything you say about rationalism could be true and longtermist spending could still be quite net positive in expectation, if the biorisk work goes well.
Given how many of the frontier AI labs have an EA-related origin story, I think it’s totally plausible that the EA AI x-risk project has been net negative.
Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy the reasoning that even tiny increases in X-risk are very bad. But note that “net negative in expectation” is compatible with “probably mostly harmless”: the expected value of X can be very negative even while the chance of the claim “X did (actual, not expected) harm” turning out to be true is low, because a small probability of a catastrophic outcome can dominate the expectation even when the most likely outcome is no harm at all. If you don’t really buy the arguments for AI X-risk but you do buy the argument that very small increases in X-risk are really bad, you might think that. On some days, I think I think that, though my views on all this aren’t very stable.
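A toy bit of arithmetic to make that concrete (a minimal sketch; every number here is purely hypothetical, not anyone’s actual estimate):

```python
# Toy illustration: a low-probability catastrophic outcome can make the
# expected value very negative even though the most likely outcome is
# "no harm at all". All numbers are made up for the example.
p_catastrophe = 0.01           # hypothetical chance the project causes harm
harm_if_catastrophe = -1e9     # hypothetical size of that harm
value_otherwise = 1e3          # hypothetical modest benefit in the likely case

expected_value = (p_catastrophe * harm_if_catastrophe
                  + (1 - p_catastrophe) * value_otherwise)
print(expected_value)      # -9999010.0: strongly net negative in expectation
print(1 - p_catastrophe)   # 0.99: yet "X did actual harm" is probably false
```

The expectation is dominated by the low-probability catastrophic branch, which is exactly how “very negative in expectation” and “probably did no actual harm” can both be true at once.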