Fair enough, maybe I was less skeptical than I thought at first, and having a really good explainer was enough to dispel the little skepticism I did have. You mention Human Compatible but don't really seem convinced by it. Is there any convincing work you've found, or have you remained unconvinced through everything you've read?
I've been skeptically minded about the subject from the start, and I've failed to find anything that convinced me. I've written a lot about my reasons for remaining unconvinced.
I've read Human Compatible, Superintelligence, the Sequences, the aforementioned Wait But Why intro, and the writings of Holden Karnofsky, and I've been regularly reading the arguments on this forum for the last year or two.
Of those, I found Yudkowsky the least convincing, because he tends to assume a level of AGI omnipotence that I find ludicrous, and he has a habit of overconfidently mangling the science whenever he ventures into my field of expertise (I'm a computational physicist). I find Karnofsky and Russell to have the best arguments, because they don't rely on omnipotent AI to make their case. They have raised my estimates of catastrophic risk from AI, even if my views on extinction risk remain largely unchanged.
Ah okay cool, a skeptic who has really engaged with the material. I won't ask you your reasons, since I'm sure I can find them on your Substack, but I would love to know: do you have rough percentages for the chance of catastrophic risk and x-risk from AI? You can restrict the estimate to the next century if that would help.
If you forced me to give numbers, I'd put the odds of catastrophe (~1 billion dead) at 1 in a thousand, and the odds of extinction at 1 in 500,000. Essentially, there are several plausible paths for a catastrophe to occur, but almost none for extinction. I don't put too much stock in the actual numbers, though, as I don't think forecasting is useful for unbounded, long-term predictions.