Thanks! Is the following a good summary of what you have in mind?
It would be helpful for reducing AI risk if the CEOs of top AI labs were willing to cut profits to invest in safety. That’s more likely to happen if top AI labs are relatively small at a crucial time, because [??]. And top AI labs are more likely to be small at this crucial time if takeoff is fast, because fast takeoff leaves them with less time to create and sell applications of near-AGI-level AI. So it would be helpful for reducing AI risk if takeoff were fast.
What fills in the “[??]” in the above? I could imagine a couple of possibilities:
Slow takeoff gives shareholders more clear evidence that they should be carefully attending to their big AI companies, which motivates them to hire CEOs who will ruthlessly profit-maximize (or pressure existing CEOs to do that).
Slow takeoff somehow leads to more intense AI competition, in which companies that ruthlessly profit-maximize get ahead, and this selects for ruthlessly profit-maximizing CEOs.
Some additional ways of challenging those possibilities:
Maybe slow takeoff makes shareholders much more wealthy (both by raising their incomes and by making ~everything cheaper) --> makes them value marginal money gains less --> makes them more willing to invest in safety.
Maybe slow takeoff gives shareholders (and CEOs) more clear evidence of risks --> makes them more willing to invest in safety.
Maybe slow takeoff provides the economies of scale + time for one AI developer to build a large lead well in advance of AGI, weakening the effects of competition.
This all seems reasonable.