I’m sympathetic to many of the points, but I’m somewhat puzzled by the framing that you chose in this letter.
The title “Why AI risk might be solved without additional intervention from longtermists” sends me the message that longtermists should care less about AI risk.
Though, the people in the “conversations” all support AI safety research. And, from Rohin’s own words:
“Overall, it feels like there’s around 90% chance that AI would not cause x-risk without additional intervention by longtermists.”
A 10% chance of existential risk from AI sounds like a problem of catastrophic proportions to me. It implies that we need many more resources spent on existential risk reduction, though perhaps not strictly on technical AI safety. Perhaps more marginal resources should be directed to strategy-oriented research instead.
reallyeli:
I had the same reaction (checking in my head that a 10% chance still merited action).
However I really think we ought to be able to discuss guesses about what’s true merely on the level of what’s true, without thinking about secondary messages being sent by some statement or another. It seems to me that if we’re unable to do so, that will make the difficult task of finding truth even more difficult.
Rohin:
“Sends me the message that longtermists should care less about AI risk.”
I do believe that, and so does Robin. I don’t know about Paul and Adam, but I wouldn’t be surprised if they thought so too.
“Though, the people in the ‘conversations’ all support AI safety research.”
Well, it’s unclear if Robin supports AI safety research, but yes, the other three of us do. This is because:
“Overall, it feels like there’s around 90% chance that AI would not cause x-risk without additional intervention by longtermists.”
(Though I’ll note that I don’t think the 10% figure is robust.)
I’m not arguing “AI will definitely go well by default, so no one should work on it”. I’m arguing “Longtermists currently overestimate the magnitude of AI risk”.
I also broadly agree with reallyeli’s comment above.
And this really does have important implications: if you believe “non-robust 10% chance of AI accident risk”, maybe you’ll find that biosecurity, global governance, etc. are more important problems to work on. I haven’t checked myself—for me personally, it seems quite clear that AI safety is my comparative advantage—but I wouldn’t be surprised if on reflection I thought one of those areas was more important for EA to work on than AI safety.
Thanks for the clarification Rohin!
I also agree overall with reallyeli.