(I have not read the full report yet; I'm merely commenting on a section in the condensed report.)
Big tech companies are incentivized to act irresponsibly
Whilst AI companies are set to earn enormous profits from developing powerful AI systems, the costs these systems impose are borne by society at large. These costs are negative externalities, like those imposed on the public by chemical companies that pollute rivers, or large banks whose failure poses systemic risks.
Further, as companies engage in fierce competition to build AI systems, they are more inclined to cut corners in a race to the bottom. In such a race, even well-meaning companies will have fewer and fewer resources dedicated to tackling the harms and threats their systems create. Of course, AI firms may take some action to mitigate risks from their products, but there are well-studied reasons to suspect they will underinvest in such safety measures.
This argument seems wrong to me. While AI does pose negative externalities—like any technology—it does not seem unusual among technologies in this specific respect (beyond the fact that both the positive and negative effects will be large). Indeed, if AI poses an existential risk, that risk is borne by both the developers and general society. Therefore, it’s unclear whether there is actually an incentive for developers to dangerously “race” if they are fully rational and informed of all relevant facts.
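To make that concrete, here is a toy expected-payoff sketch (my own illustration; the symbols $p$, $V$, and $L$ are made up for this comment, not taken from the report). Let $p$ be the probability that racing ahead ends in catastrophe, $V$ the profit the developer captures if it does not, and $L$ the loss the developer itself suffers if it does:

$$
\mathbb{E}[\text{payoff of racing}] = (1 - p)\,V - p\,L.
$$

In a textbook externality, such as a factory polluting a river, the decision-maker's own $L$ is close to zero because the harm falls on others, so it over-produces relative to the social optimum. If the harm is existential, $L$ includes essentially everything the developer values, so a fully rational and fully informed developer already internalizes much of the downside, and the wedge between private and social incentives is correspondingly smaller.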
In my opinion, the main risk of AI does not come from negative externalities, but rather from a more fundamental knowledge problem: we cannot easily predict the results of deploying AI widely over long time horizons. This problem is real, but it does not by itself imply that individual AI developers are incentivized to act irresponsibly in the way described by the article; instead, it implies that developers may act unwisely out of ignorance of the full consequences of their actions.
These two concepts—negative externalities and the knowledge problem—should be carefully distinguished, as they have different implications for how to regulate AI optimally. If AI poses large negative externalities (and these are not outweighed by its positive externalities), then the solution could look like a tax on AI development, or regulation with a similar effect. On the other hand, if the problem posed by AI is that it is difficult to predict how AI will impact the world in the coming decades, then the solution plausibly looks more like investigating how AI is likely to unfold and affect the world.
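For reference, the textbook remedy for an externality works roughly as follows (standard Pigouvian-tax reasoning, not anything the report spells out). If a unit of AI development carries private marginal cost $MC$ to the developer and marginal external cost $MEC$ to everyone else, the efficient corrective tax is

$$
t^* = MEC, \qquad \text{so that the developer faces } MC + t^* = MC + MEC,
$$

i.e. the full social cost of its activity. Note that this machinery only helps if the regulator can estimate $MEC$; if the core difficulty is that nobody can yet predict AI's long-run effects, the natural first step is the kind of investigation described above rather than a tax.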