This is a stupid analogy! (Traffic accidents aren’t very likely.)
Oh, I didn’t mean to imply that I think AI takeover risk is on par with traffic-accident risk. I was just illustrating the abstract point that the mere presence of a mission-ending risk doesn’t imply spending everything to prevent it. I am guessing you agree with this abstract point (but furthermore think that AI takeover risk is extremely high, and as such we should ~entirely focus on preventing it).
I think Wei Dai’s reply articulates my position well:
Maybe I’m splitting hairs, but “x-risk could be high this century as a result of AI” is not the same claim as “x-risk from AI takeover is high this century”, and I read you as making the latter claim (obviously I can’t speak for Wei Dai).
No, the correct reply is that dolphins won’t run the world because they can’t develop technology.
That’s right, and I do think the dolphin example was too misleading and straw-man-ish. The point I was trying to illustrate, though, is not that there is no way to refute the dolphin theory, but that failing to adequately describe the alternative outcome(s) doesn’t especially support the dolphin theory, because trying to accurately describe the future is just generally extremely hard.
No, but they had sound theoretical arguments. I’m saying these are lacking when it comes to why it’s possible to align/control/not go extinct from ASI.
Got it. I guess I see things as messier than this — I see people with very high estimates of AI takeover risk advancing arguments, and I see others advancing skeptical counter-arguments (example), and before engaging with these arguments a lot and forming one’s own views, I think it’s not obvious which sets of arguments are fundamentally unsound.
But it’s worse than this, because the only viable solution to avoid takeover is to stop building ASI, in which case the non-takeover work is redundant (we can mostly just hope to luck out with one of the exotic factors).
I am guessing you agree with this abstract point (but furthermore think that AI takeover risk is extremely high, and as such we should ~entirely focus on preventing it).
Yes (but also, I don’t think the abstract point is adding anything, because the risk actually is significant).
Maybe I’m splitting hairs, but “x-risk could be high this century as a result of AI” is not the same claim as “x-risk from AI takeover is high this century”, and I read you as making the latter claim (obviously I can’t speak for Wei Dai).
This does seem like splitting hairs. Most of Wei Dai’s linked list is about AI takeover x-risk (or at least x-risk as a result of actions that AI might take, rather than actions that humans controlling AIs might take). Also, I’m not sure where “century” comes from? We’re talking about the next 5-10 years, mostly.
I guess I see things as messier than this — I see people with very high estimates of AI takeover risk advancing arguments, and I see others advancing skeptical counter-arguments (example), and before engaging with these arguments a lot and forming one’s own views, I think it’s not obvious which sets of arguments are fundamentally unsound.
I think there are a number of intuitions and intuition pumps that are useful here:
- Intelligence being evolutionarily favourable (in a generalised Darwinism sense).
- There being no evidence for moral realism (an objective ethics of the universe existing independently of humans) being true (-> Orthogonality Thesis), or for humanity having a special (divine) place in the universe (we don’t have plot armour).
- Convergent instrumental goals being overdetermined.
- Security mindset (I think most people who have low p(doom)s probably lack this?).
That said, we also must engage with the best counter-arguments to steelman our positions. I will come back to your linked example.
Makes sense.