It is true that private developers internalize some of the costs of AI risk. However, the same is true of carbon emissions: if a company emits CO2, its shareholders do pay some costs in the form of a more polluted atmosphere. The problem is that the private developer pays only a very small fraction of the total costs, and that fraction, while still quite large in absolute terms, is plausibly worth paying for the upside. For example, if I were entirely selfish and I thought AI risk was somewhat less likely than I actually do (let's say 10%), I would probably be willing to risk a 10% chance of death for a 90% chance of massive resource acquisition and control over the future. However, if I internalized the full costs of that 10% chance (everyone else dying and all future generations being wiped out), then I would not be willing to take that gamble.
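The gamble above can be sketched as a toy expected-value calculation. All of the utility numbers below are invented purely for illustration; the only figure taken from the text is the 10% probability of catastrophe:

```python
# Toy model: the same risky bet looks worthwhile to a selfish developer
# but clearly bad once external costs are internalized.
# All utilities are hypothetical and in arbitrary units.

P_DOOM = 0.10  # assumed probability of catastrophe, as in the example above

UPSIDE = 1_000              # massive resource acquisition and control
OWN_DEATH = -100            # the developer's private cost of dying
EXTERNAL_COST = -1_000_000  # everyone else dying, all future generations lost

def expected_value(internalize_externalities: bool) -> float:
    """Expected value of taking the gamble, optionally counting external costs."""
    downside = OWN_DEATH + (EXTERNAL_COST if internalize_externalities else 0)
    return (1 - P_DOOM) * UPSIDE + P_DOOM * downside

selfish = expected_value(internalize_externalities=False)
internalized = expected_value(internalize_externalities=True)

print(f"selfish EV:      {selfish:+.0f}")       # positive: gamble looks worth taking
print(f"internalized EV: {internalized:+.0f}")  # negative: gamble is clearly not worth it
```

The point of the sketch is structural rather than numerical: because the external cost term dwarfs the private downside, almost any plausible choice of utilities flips the sign of the expected value once externalities are counted.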