I’m not arguing “AI will definitely go well by default, so no one should work on it”. I’m arguing “Longtermists currently overestimate the magnitude of AI risk”.
I also broadly agree with reallyeli:
However I really think we ought to be able to discuss guesses about what’s true merely on the level of what’s true, without thinking about secondary messages being sent by some statement or another. It seems to me that if we’re unable to do so, that will make the difficult task of finding truth even more difficult.
And this really does have important implications: if you believe “non-robust 10% chance of AI accident risk”, maybe you’ll find that biosecurity, global governance, etc. are more important problems to work on. I haven’t checked this myself (for me personally, it seems quite clear that AI safety is my comparative advantage), but I wouldn’t be surprised if on reflection I thought one of those areas was more important for EA to work on than AI safety.
I’m not arguing “AI will definitely go well by default, so no one should work on it”. I’m arguing “Longtermists currently overestimate the magnitude of AI risk”.
I do believe that, and so does Robin. I don’t know about Paul and Adam, but I wouldn’t be surprised if they thought so too.
Well, it’s unclear if Robin supports AI safety research, but yes, the other three of us do. This is because:
(Though I’ll note that I don’t think the 10% figure is robust.)
Thanks for the clarification Rohin!
I also agree overall with reallyeli.