Thanks, have changed it to 30%, given the median answer to question 2 (level of existential risk from “AI systems not doing/optimizing what the people deploying them wanted/intended”).
I’ll note that I find this somewhat surprising. What are the main mechanisms whereby AGI ends up aligned/safe by default? Or are most people surveyed thinking that alignment will be solved in time (or is already essentially solved)? Or are people putting significant weight on non-existential GCR-type scenarios?
It’s wrong.
Some relevant writing:
AN #80: Why AI risk might be solved without additional intervention from longtermists
Is power-seeking AI an existential risk? (AN #170)
Late 2021 MIRI conversations