Yeah. I actually work on it right now (governance/forecasting, not technical stuff obviously), because it’s the job I managed to get when I really needed a job (and it’s interesting), but I remain personally skeptical. Though it is hard to tell the difference, in such a speculative context, between 1 in 1,000 (which probably means it is actually worth working on in expectation, at least if you expect X-risk to drop dramatically if AI is negotiated successfully and you have totalist sympathies in population ethics) and 1 in 1 million* (which might look worth working on in expectation if taken literally, but is probably really a signal that it might be way lower for all you know). I don’t have anything terribly interesting to say about why I’m skeptical: just boring stuff about how prediction is hard, your prior should be low on a very specific future path, and social-epistemology worries about bubbles and ideas that pattern-match to the religious/apocalyptic, combined with a general feeling that the AI risk stuff I have read is not rigorous enough to overcome my low prior.
‘I wonder if there’s a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.’
I hadn’t heard that suggested before. But you will have a much better idea of the distribution of opinion than I do. My guess would be that the divide is LW/rationalist versus not. “Low” is also ambiguous, of course: compared to MIRI people, even someone like Christiano, you, or Joe Carlsmith probably counts as having “low” estimates, but those are likely a lot higher than those of AI X-risk “skeptics” outside EA.
*Seems too low to me, but I am of course biased.
Christiano says ~22% (“but you should treat these numbers as having 0.5 significant figures”) without a time bound; Carlsmith says “>10%” (see the bottom of his abstract) by 2070. So no big difference there.
Fair point. Carlsmith’s original estimate was lower, though.