> I feel like the tax haven comparison doesn’t really apply if there is a broad consensus that building AGI is risky.
There may not be such a consensus. Moreover, nations may be willing to take risks: the nations of the world are already gambling on burning fossil fuels, even while acknowledging the risks involved. Finally, all it takes is one successful defector nation to override the consensus. Sweden, for example, defected from the Western consensus in favor of lockdowns.
> For example, dictators are constantly trying to stay in power. They wouldn’t want to lose it to a superintelligence. (In this sense, it would be closer to biological weapons: risky to everyone, including the producer.)
Dictators are also generally interested in growing their power. Putin, for example, is currently attempting to expand Russia at considerable personal risk. And unlike biological weapons, AI promises vast prosperity, not merely an advantage in war.
> However, different actors will appraise the technology differently, such that some will view it positively, and if AGI becomes really cheap, I agree that the costs of maintaining a moratorium will be enormous. But by then, alignment research will probably have advanced, and society could decide to carefully lift the moratorium?
How will we decide when we’ve done enough alignment research? I don’t think the answer is obvious. My guess is that at every point in time, a significant fraction of people will claim we haven’t done “enough” research yet. People have different risk tolerances, and on this question in particular there is profound disagreement about how risky the technology even is in the first place. I don’t anticipate a complete consensus on AI safety until perhaps long after the technology has been deployed. At some point, if society decides to proceed, it will do so against the wishes of many people.
> So if you are concerned about a pause lasting too long, I feel like you need to spell out why it would last (way) too long.
It may not last too long if people don’t actively push for that outcome. I am arguing against the idea that we should push for a long pause in the first place.