I was assuming that designing safe AI systems is more expensive than not doing so, say 10% more expensive. In a world with only a few top AI labs which are not yet ruthlessly optimized, they could probably be persuaded to sacrifice that 10%. But trying to convince a trillion-dollar company to sacrifice 10% of its budget requires a whole lot of public pressure. The bosses of those companies didn’t get there without being very protective of 10% of their budgets.
You could challenge that though. You could say that alignment was instrumentally useful for creating market value. I’m not sure what my position is on that actually.
Thanks! Is the following a good summary of what you have in mind?
It would be helpful for reducing AI risk if the CEOs of top AI labs were willing to cut profits to invest in safety. That’s more likely to happen if top AI labs are relatively small at a crucial time, because [??]. And top AI labs are more likely to be small at this crucial time if takeoff is fast, because fast takeoff leaves them with less time to create and sell applications of near-AGI-level AI. So it would be helpful for reducing AI risk if takeoff were fast.
What fills in the “[??]” in the above? I could imagine a couple of possibilities:
Slow takeoff gives shareholders more clear evidence that they should be carefully attending to their big AI companies, which motivates them to hire CEOs who will ruthlessly profit-maximize (or pressure existing CEOs to do that).
Slow takeoff somehow leads to more intense AI competition, in which companies that ruthlessly profit-maximize get ahead, and this selects for ruthlessly profit-maximizing CEOs.
Additional ways of challenging those might be:
Maybe slow takeoff makes shareholders much more wealthy (both by raising their incomes and by making ~everything cheaper) --> makes them value marginal money gains less --> makes them more willing to invest in safety.
Maybe slow takeoff gives shareholders (and CEOs) more clear evidence of risks --> makes them more willing to invest in safety.
Maybe slow takeoff involves the economies of scale + time for one AI developer to build a large lead well in advance of AGI, weakening the effects of competition.
This all seems reasonable.