I appreciate that this post acknowledges that there are costs to caution. I think it could’ve gone a bit further in emphasizing how these costs, while large in an absolute sense, are small relative to the risks.
The formal way to do this would be a cost-benefit analysis on longtermist grounds (perhaps with various discount rates for future lives). But I think there’s also a way to do this in less formal/wonky language, without requiring any longtermist assumptions.
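For what it’s worth, below is a toy sketch of what that formal comparison might look like. It is my own illustration, not something from the OP: the 10% vs. 5% extinction probabilities, the ten-year pause, and the discount rates are all placeholder assumptions, not estimates anyone in this thread has made.

```python
# Toy back-of-the-envelope comparison of "proceed now" vs. "pause a decade".
# All numbers are placeholder assumptions for illustration only.

def expected_value(p_doom: float, delay_years: float, annual_discount: float) -> float:
    """Survival probability times the discounted value of a good long-term future
    (normalized to 1); pausing delays that future by `delay_years`."""
    discount = (1 - annual_discount) ** delay_years
    return (1 - p_doom) * discount

for annual_discount in (0.0, 0.001, 0.05):
    proceed = expected_value(p_doom=0.10, delay_years=0, annual_discount=annual_discount)
    pause = expected_value(p_doom=0.05, delay_years=10, annual_discount=annual_discount)
    print(f"discount={annual_discount:.3f}: proceed={proceed:.3f}, pause={pause:.3f}")

# With near-zero discount rates, the assumed risk reduction dominates the delay;
# only at fairly high discount rates does a decade's lost benefit flip the answer.
```

The upshot of the toy numbers is just the point about discount rates: how much you discount future lives largely determines whether a pause looks cheap or expensive.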
If you have a technology where half of experts believe there’s a ~10% chance of extinction, the benefits would need to be enormous for the costs of caution to outweigh that risk. I like Tristan Harris’s airplane analogy:
Imagine: would you board an airplane if 50% of airplane engineers who built it said there was a 10% chance that everybody on board dies?
Here’s another frame (that I’ve been finding useful with folks who don’t follow the technical AI risk scene much): History is full of examples of people saying that they are going to solve everyone’s problems. There are many failed messiah stories. In the case of AGI, it’s true that aligned and responsibly developed AI could do a lot of good. But when you have people saying “the risks are overblown—we’re smart and responsible enough to solve everything”, I think it’s pretty reasonable to be skeptical (on priors alone).
Finally, two things often get missed in discussions about slowdown. First, most advocates of a pause still want to get to AGI eventually. Second, slowing down for a few years or decades is genuinely costly, and advocates of slowdown should acknowledge this, even though those costs are substantially lower than the risks.
Imagine: would you board an airplane if 50% of airplane engineers who built it said there was a 10% chance that everybody on board dies?
In the context of the OP, the thought experiment would need to be extended.
“Would you risk a 10% chance of a deadly crash to go to [random country]” → ~100% of people reply no.
“Would you risk a 10% chance of a deadly crash to go to a Utopia without material scarcity, conflict, or disease?” → One would expect a much more mixed response.
The main ethical problem is that in the scenario of global AI progress, everyone is forced to board the plane, irrespective of their preferences.
I agree with you more than with Akash/Tristan Harris here, but note that death and Utopia are not the only possible outcomes! It’s more like “Would you risk a 10% chance of a deadly crash for a chance to go to a Utopia without material scarcity, conflict, or disease?”