In any social policy battle (climate change, racial justice, animal rights) there will be people who believe that extreme actions are necessary. It’s perhaps unusual on the AI front that one of the highest-profile experts is on that extreme, but it’s still not an unusual situation. A couple of points in favour of this message having a net positive effect:
1. I don’t buy the argument that extreme rhetoric alienates people from the cause in general. This is a common assumption, but the little evidence we have suggests that extreme actions or talk might actually both increase the visibility of the cause and increase support for more moderate groups. Anecdotally, on the AI front @lilly seems to be seeing something similar too.
2. On a rational front, if he is this sure of doom, his practical solution seems to make the most sense. It shows intellectual integrity. We can’t expect someone to hold a p(doom) of 99% given the status quo and then just suggest better alignment strategies. From a scout mindset perspective, we need to put ourselves in the 99%-doom shoes before dismissing this opinion as irrational, even if we strongly disagree with his p(doom).
3. (Related to 1) I feel like AI risk is still perhaps at the “any publicity is good publicity” stage, as many people are still completely unaware of it. Anything a bit wild like this which attracts more attention and debate is likely to be good. Within a few months or years this may change, though, as AI risk becomes truly mainstream. Outside tech bubbles it certainly isn’t yet.
People who know that they are outliers amongst experts in how likely they think X is (as I think being 99% sure of doom is, particularly when combined with short-ish timelines) should be cautious about taking extreme actions on the basis of an outlying view, even if they think they have performed a personal adjustment to down-weight their confidence to account for the fact that other experts disagree, and still ended up north of 99%. Otherwise you get the problem that extreme actions are taken even when most experts think they will be bad. In that sense, integrity of the kind you’re praising is actually potentially very bad and dangerous, even if there are some readings of “rational” on which it counts as rational.
Of course, what Eliezer is doing is not taking extreme actions but recommending that governments do so in certain circumstances, and that is much less obviously a bad thing to do, since governments will also hear from experts who are closer to the median expert.