By (stupid) analogy, all the preparations for a wedding would be undermined if the couple got into a traffic accident on the way to the ceremony; this does not justify spending ~all the wedding budget on car safety.
This is a stupid analogy! (Traffic accidents aren’t very likely.) A better analogy would be “all the preparations for a wedding would be undermined if the couple weren’t able to be together because one was stranded on Mars with no hope of escape. This justifies spending all the wedding budget on trying to rescue them.” Or perhaps even better: “all the preparations for a wedding would be undermined if the couple probably won’t be able to be together, because one is taking part in a mission to Mars that half the engineers and scientists on the guest list are convinced will be a death trap (for detailed technical reasons). This justifies spending all the wedding budget on trying to stop the mission from going ahead.”
If not, what are the main obstacles to reaching existential security from here?
I think Wei Dai’s reply articulates my position well:
…and collected the obstacles, you might assemble a list like this one, which might update you toward AI x-risk being “overwhelmingly likely”. (Personally, if I had to put a number on it, I’d say 80%.)
Your next point seems somewhat of a straw man?
If I tell someone the world will be run by dolphins in the year 2050, and they disagree, I can reply, “oh yeah, well you tell me what the world looks like in 2050”
No, the correct reply is that dolphins won’t run the world because they can’t develop technology, due to their physical form (no opposable thumbs, etc.), and they won’t be able to evolve their physical form in such a short time (even with help from human collaborators)[1]. I.e., an object-level rebuttal.
The opponents of these arguments were not able to describe the ways that the world could avoid these dire fates in detail
No, but they had sound theoretical arguments. I’m saying these are lacking when it comes to why it’s possible to align/control/not go extinct from ASI.
Altogether, I think you’re coming from a reasonable but different position, that takeover risk from ASI is very high (sounds like 60–99% given ASI?)
I’d say ~90% (and the remaining 10% is mostly exotic factors beyond our control [footnote 10 of linked post]).
I do think this axis of disagreement might not be as sharp as it seems, though — suppose person A has 90% p(takeover) and person B is on 1%. Assuming the same marginal tractability and neglectedness between takeover and non-takeover work, person A thinks that takeover-focused work is 90× more important; but non-takeover work is 10/99 ≈ 0.1 times as important, compared to person B.
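To spell out the arithmetic in the quoted comparison, here is a minimal sketch in Python (the 90% and 1% figures are the ones named in the quote; the variable names are my own):

```python
# Minimal sketch of the quoted importance comparison.
# Assumption: with equal marginal tractability and neglectedness, the value a
# person assigns to a line of work scales with the probability of the scenario
# that work addresses.

p_takeover_A = 0.90  # person A's p(takeover), as in the quote
p_takeover_B = 0.01  # person B's p(takeover), as in the quote

# How much more important person A rates takeover-focused work, relative to B:
takeover_ratio = p_takeover_A / p_takeover_B                  # 0.90 / 0.01 = 90
# How important person A rates non-takeover work, relative to B:
non_takeover_ratio = (1 - p_takeover_A) / (1 - p_takeover_B)  # 0.10 / 0.99 ≈ 0.10

print(f"takeover-focused work: {takeover_ratio:.0f}x as important for A as for B")
print(f"non-takeover work: {non_takeover_ratio:.2f}x as important for A as for B")
```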
But it’s worse than this, because the only viable solution to avoid takeover is to stop building ASI, in which case the non-takeover work is redundant (we can mostly just hope to luck out with one of the exotic factors).
And they won’t be able to be helped by ASIs either, because the control/alignment problem will remain unsolved (and probably unsolvable, for reasons x, y, z...)
Thanks for the reply.
This is a stupid analogy! (Traffic accidents aren’t very likely.) A better analogy would be “all the preparations for a wedding would be undermined if the couple weren’t able to be together because one was stranded on Mars with no hope of escape. This justifies spending all the wedding budget on trying to rescue them.” Or perhaps even better: “all the preparations for a wedding would be undermined if the couple probably won’t be able to be together, because one is taking part in a mission to Mars that half the engineers and scientists on the guest list are convinced will be a death trap (for detailed technical reasons). This justifies spending all the wedding budget on trying to stop the mission from going ahead.”
I think Wei Dai’s reply articulates my position well:
Your next point seems somewhat of a straw man?
No, the correct reply is that dolphins won’t run the world because they can’t develop technology, due to their physical form (no opposable thumbs, etc.), and they won’t be able to evolve their physical form in such a short time (even with help from human collaborators)[1]. I.e., an object-level rebuttal.
No, but they had sound theoretical arguments. I’m saying these are lacking when it comes to why it’s possible to align/control/not go extinct from ASI.
I’d say ~90% (and the remaining 10% is mostly exotic factors beyond our control [footnote 10 of linked post]).
But it’s worse than this, because the only viable solution to avoid takeover is to stop building ASI, in which case the non-takeover work is redundant (we can mostly just hope to luck out with one of the exotic factors).
And they won’t be able to be helped by ASIs either, because the control/alignment problem will remain unsolved (and probably unsolvable, for reasons x, y, z...)
Oh, I didn’t mean to imply that I think AI takeover risk is on par with traffic accident risk. I was just illustrating the abstract point that the mere presence of a mission-ending risk doesn’t imply spending everything to prevent it. I am guessing you agree with this abstract point (but furthermore think that AI takeover risk is extremely high, and as such we should ~entirely focus on preventing it).
Maybe I’m splitting hairs, but “x-risk could be high this century as a result of AI” is not the same claim as “x-risk from AI takeover is high this century”, and I read you as making the latter claim (obviously I can’t speak for Wei Dai).
That’s right, and I do think the dolphin example was too misleading and straw-man-ish. The point I was trying to illustrate, though, is not that there is no way to refute the dolphin theory, but that failing to adequately describe the alternative outcome(s) doesn’t especially support the dolphin theory, because trying to accurately describe the future is just generally extremely hard.
Got it. I guess I see things as messier than this — I see people with very high estimates of AI takeover risk advancing arguments, and I see others advancing skeptical counter-arguments (example), and before engaging with these arguments a lot and forming one’s own views, I think it’s not obvious which sets of arguments are fundamentally unsound.
Makes sense.
Yes (but also, I don’t think the abstract point is adding anything, because the risk actually is significant).
This does seem like splitting hairs. Most of Wei Dai’s linked list is about AI takeover x-risk (or at least x-risk as a result of actions that AI might take, rather than actions that humans controlling AIs might take). Also, I’m not sure where “century” comes from? We’re talking about the next 5-10 years, mostly.
I think there are a number of intuitions and intuition pumps that are useful here: intelligence being evolutionarily favourable (in a generalised Darwinism sense); there being no evidence that moral realism (an objective ethics of the universe, existing independently of humans) is true (hence the Orthogonality Thesis), or that humanity has a special (divine) place in the universe (we don’t have plot armour); convergent instrumental goals being overdetermined; and security mindset (I think most people who have a low p(doom) probably lack this?).
That said, we also must engage with the best counter-arguments to steelman our positions. I will come back to your linked example.