I agree with the general thrust of the post, but when analyzing technological risks I think one can get substantial evidence just by considering the projected “power level” of the technology, whereas you focus on evidence that this power level will lead to extinction. I agree the latter is much harder to get evidence about, but I think the former is sufficient to be very worrying without much evidence on the latter.
Specifically, re: AI you write:
we’re reliant on abstract arguments that use ambiguous concepts (e.g. “objectives” and “intelligence”), rough analogies, observations of the behaviour of present-day AI systems (e.g. reinforcement learners that play videogames) that will probably be very different than future AI systems, a single datapoint (the evolution of human intelligence and values) that has a lot of important differences with the case we’re considering, and attempts to predict the incentives and beliefs of future actors in development scenarios that are still very opaque to us.
I roughly agree with all of this, but by itself the argument that we will plausibly create AI systems more powerful than humans within the next century (e.g. Ajeya’s timelines report) seems like enough to make the risk pretty high. I’m not sure what our prior should be on existential risk conditioned on a technology this powerful being developed, but honestly starting from 50% might not be unreasonable.
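To make the shape of that estimate explicit (a rough sketch, with both factors as illustrative placeholders rather than figures taken from the report):

$$
P(\text{existential catastrophe from AI}) \;\approx\; P(\text{more-powerful-than-human AI this century}) \times P(\text{existential catastrophe} \mid \text{such AI is developed})
$$

If timelines evidence puts the first factor well above one half, then even a conditional prior considerably more modest than 50% leaves the product uncomfortably high.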
Similar points were made previously e.g. by Richard Ngo with the “second species” argument, or by Joe Carlsmith in his report on x-risk from power-seeking AI: “Creating agents who are far more intelligent than us is playing with fire.”