Thanks for the comment. I agree that if you think AI takeover is the overwhelmingly most likely outcome from developing ASI, then preventing takeover (including by preventing ASI) should be your strong focus. Some comments, though —
Just because failing at alignment undermines ~every other issue doesn’t mean that working on alignment is the only or overwhelmingly most important thing.[1] Tractability and likelihood also matter.
I’m not sure I buy that things are as stark as “there are no arguments against AI takeover”; see e.g. Katja Grace’s post here. I also think there are cases where someone presents you with an argument that superficially drives toward a conclusion that sounds unlikely, and it’s legitimate to be skeptical of the conclusion even if you can’t spell out exactly where the argument is going wrong (e.g. the two-envelope “paradox”). That’s not to say you can justify not engaging with the theoretical arguments whenever you’re uncomfortable with where they point, just that humility about deducing bold claims about the future on theoretical grounds cuts both ways.
Relatedly, I don’t think you need to be able to describe alternative outcomes in detail to reject a prediction about how the world goes. If I tell someone the world will be run by dolphins in the year 2050, and they disagree, I can reply, “oh yeah, well you tell me what the world looks like in 2050”, and their failure to describe their median world in detail doesn’t strongly support the dolphin hypothesis.[2]
“Default” doesn’t necessarily mean “unconditionally likely” IMO. Here I take it to mean something more like “conditioning on no specific response and/or targeted countermeasures”. Though I guess it’s baked into the meaning of “default” that it’s unconditionally plausible (like, ⩾5%?) — it would be misleading to say “the default outcome from this road trip is that we all die (if we don’t steer out of oncoming traffic)”.
In theory, one could work on making outcomes from AI takeover less bad, as well as making them less likely (though it’s less clear what this would look like).
Altogether, I think you’re coming from a reasonable but different position, that takeover risk from ASI is very high (sounds like 60–99% given ASI?). I agree that kinds of preparedness not focused on avoiding takeover look less important on this view (largely because they matter in fewer worlds). I do think this axis of disagreement might not be as sharp as it seems, though — suppose person A has 60% p(takeover) and person B is on 1%. Assuming the same marginal tractability and neglectedness between takeover and non-takeover work, person A thinks that takeover-focused work is 60× more important; but non-takeover work is 40/99≈0.4 times as important, compared to person B.
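To make that arithmetic explicit, here is a minimal sketch (assuming, as above, equal marginal tractability and neglectedness, so importance just scales with the probability of the world in which the work matters; the importance_ratios helper is purely illustrative):

```python
def importance_ratios(p_takeover_a, p_takeover_b):
    """How important takeover-focused vs. non-takeover work looks to person A,
    relative to person B, if importance scales with the probability of the
    world in which that work matters (equal tractability/neglectedness otherwise)."""
    takeover_ratio = p_takeover_a / p_takeover_b
    non_takeover_ratio = (1 - p_takeover_a) / (1 - p_takeover_b)
    return takeover_ratio, non_takeover_ratio

# Person A at 60% p(takeover | ASI), person B at 1%:
t, n = importance_ratios(0.60, 0.01)
print(f"takeover-focused work: {t:.0f}x as important to A as to B")  # 60x
print(f"non-takeover work:     {n:.2f}x as important to A as to B")  # ~0.40x
```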
[1] By (stupid) analogy, all the preparations for a wedding would be undermined if the couple got into a traffic accident on the way to the ceremony; this does not justify spending ~all the wedding budget on car safety.
[2] Again by analogy, there were some superficially plausible arguments in the 1970s or thereabouts that population growth would exceed the world’s carrying capacity, and we’d run out of many basic materials, and there would be a kind of system collapse by 2000. The opponents of these arguments were not able to describe the ways that the world could avoid these dire fates in detail (they could not describe the specific tech advances which could raise agricultural productivity, or keep materials prices relatively level, for instance).
Thanks for the reply.

By (stupid) analogy, all the preparations for a wedding would be undermined if the couple got into a traffic accident on the way to the ceremony; this does not justify spending ~all the wedding budget on car safety.
This is a stupid analogy! (Traffic accidents aren’t very likely.) A better analogy would be “all the preparations for a wedding would be undermined if the couple weren’t able to be together because one was stranded on Mars with no hope of escape. This justifies spending all the wedding budget on trying to rescue them.” Or perhaps even better: “all the preparations for a wedding would be undermined if the couple probably won’t be able to be together, because one is taking part in a mission to Mars that half the engineers and scientists on the guest list are convinced will be a death trap (for detailed technical reasons). This justifies spending all the wedding budget on trying to stop the mission from going ahead.”
I think Wei Dai’s reply articulates my position well:

If not, what are the main obstacles to reaching existential security from here?

[…] and collected the obstacles, you might assemble a list like this one, which might update you toward AI x-risk being “overwhelmingly likely”. (Personally, if I had to put a number on it, I’d say 80%.)
Your next point seems somewhat of a straw man?
If I tell someone the world will be run by dolphins in the year 2050, and they disagree, I can reply, “oh yeah, well you tell me what the world looks like in 2050”
No, the correct reply is that dolphins won’t run the world because they can’t develop technology due to their physical form (no opposable thumbs etc), and they won’t be able to evolve their physical form in such a short time (even with help from human collaborators)[1]. I.e., an object-level rebuttal.
The opponents of these arguments were not able to describe the ways that the world could avoid these dire fates in detail
No, but they had sound theoretical arguments. I’m saying these are lacking when it comes to why it’s possible to align/control/not go extinct from ASI.
Altogether, I think you’re coming from a reasonable but different position, that takeover risk from ASI is very high (sounds like 60–99% given ASI?)

I’d say ~90% (and the remaining 10% is mostly exotic factors beyond our control [footnote 10 of linked post]).
I do think this axis of disagreement might not be as sharp as it seems, though — suppose person A has [9]0% p(takeover) and person B is on 1%. Assuming the same marginal tractability and neglectedness between takeover and non-takeover work, person A thinks that takeover-focused work is [9]0× more important; but non-takeover work is 10/99≈0.[1] times as important, compared to person B.
But it’s worse than this, because the only viable solution to avoid takeover is to stop building ASI, in which case the non-takeover work is redundant (we can mostly just hope to luck out with one of the exotic factors).
And they won’t be able to be helped by ASIs either, because the control/alignment problem will remain unsolved (and probably unsolvable, for reasons x, y, z...)
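For concreteness, re-running the illustrative importance_ratios sketch from earlier in the thread with ~90% instead of 60% reproduces the bracketed numbers above:

```python
# Same sketch as before, but with person A at ~90% p(takeover | ASI):
t, n = importance_ratios(0.90, 0.01)
print(f"takeover-focused work: {t:.0f}x as important to A as to B")  # 90x
print(f"non-takeover work:     {n:.2f}x as important to A as to B")  # ~0.10x
```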
This is a stupid analogy! (Traffic accidents aren’t very likely.)
Oh, I didn’t mean to imply that I think AI takeover risk is on par with traffic-accident risk. I was just illustrating the abstract point that the mere presence of a mission-ending risk doesn’t imply spending everything to prevent it. I am guessing you agree with this abstract point (but furthermore think that AI takeover risk is extremely high, and as such we should ~entirely focus on preventing it).
I think Wei Dai’s reply articulates my position well:
Maybe I’m splitting hairs, but “x-risk could be high this century as a result of AI” is not the same claim as “x-risk from AI takeover is high this century”, and I read you as making the latter claim (obviously I can’t speak for Wei Dai).
No, the correct reply is that dolphins won’t run the world because they can’t develop technology
That’s right, and I do think the dolphin example was too misleading and straw-man-ish. The point I was trying to illustrate, though, is not that there is no way to refute the dolphin theory, but that failing to adequately describe the alternative outcome(s) doesn’t especially support the dolphin theory, because trying to accurately describe the future is just generally extremely hard.
No, but they had sound theoretical arguments. I’m saying these are lacking when it comes to why it’s possible to align/control/not go extinct from ASI.
Got it. I guess I see things as messier than this — I see people with very high estimates of AI takeover risk advancing arguments, and I see others advancing skeptical counter-arguments (example), and before engaging with these arguments a lot and forming one’s own views, I think it’s not obvious which sets of arguments are fundamentally unsound.
But it’s worse than this, because the only viable solution to avoid takeover is to stop building ASI, in which case the non-takeover work is redundant (we can mostly just hope to luck out with one of the exotic factors).

Makes sense.
I am guessing you agree with this abstract point (but furthermore think that AI takeover risk is extremely high, and as such we should ~entirely focus on preventing it).
Yes (but also, I don’t think the abstract point is adding anything, because the risk actually is significant).
Maybe I’m splitting hairs, but “x-risk could be high this century as a result of AI” is not the same claim as “x-risk from AI takeover is high this century”, and I read you as making the latter claim (obviously I can’t speak for Wei Dai).
This does seem like splitting hairs. Most of Wei Dai’s linked list is about AI takeover x-risk (or at least x-risk as a result of actions that AI might take, rather than actions that humans controlling AIs might take). Also, I’m not sure where “century” comes from? We’re talking about the next 5-10 years, mostly.
I guess I see things as messier than this — I see people with very high estimates of AI takeover risk advancing arguments, and I see others advancing skeptical counter-arguments (example), and before engaging with these arguments a lot and forming one’s own views, I think it’s not obvious which sets of arguments are fundamentally unsound.
I think there are a number of intuitions and intuition pumps that are useful here: intelligence being evolutionarily favourable (in a generalised-Darwinism sense); there being no evidence that moral realism (an objective ethics of the universe existing independently of humans) is true (→ Orthogonality Thesis), or that humanity has a special (divine) place in the universe (we don’t have plot armour); convergent instrumental goals being overdetermined; and security mindset (I think most people who have low p(doom)s probably lack this?).
That said, we also must engage with the best counter-arguments to steelman our positions. I will come back to your linked example.