Wow, this is a well-written, well-researched post. Thanks for putting it together!
Factors that would lend themselves to AI restraint
Constant percentage improvements under Moore's law have become more and more expensive. State-of-the-art chip fabs now cost many billions of dollars to build.
Preventing new discoveries in AI from being published might just be in the near-term interest of countries that view having differential AI progress as a strategic advantage. The same can be said of companies.
Generative models that rely on open datasets like those from Common Crawl could run into copyright issues if they start invading the markets of the artists and writers whose work was used to train them.
Factors working against restraint
The widespread distribution of computational resources makes it difficult to prevent near-term progress in AI.
Many countries (including China and the US) view AI as being relevant to their strategic political dominance.
The general public does not yet take the idea of dangerous AI seriously, aside from a narrow focus on AI bias, which does not seem particularly relevant to the most concerning aspects of AGI. It will be very difficult to rally public support for legislation unless this changes.
The short-term benefits of Moore’s law continuing are widespread. If people can’t buy a better iPhone next year because we banned new fabs, they are going to be upset.
Possible improvements to the post
It would have been nice to see some in-text examples of the ban-enabling features that AI lacks. I clicked on the links you provided, but they contained too much information for it to be worth my time to go through them.