Briefly, to reiterate / expand on a point made by a few other comments: I think the title is somewhat misleading, because it conflates expecting aligned AGI with expecting high growth. People could be expecting aligned AGI but (correctly or incorrectly) not expecting it to dramatically raise the growth rate.
This divergence in expectations isn’t just a technical possibility; a survey of economists attending the NBER conference on the economics of AI last year revealed that most of them do not expect AGI, when it arrives, to dramatically raise the growth rate. The survey should be out in a few weeks, and I’ll try to remember to link to it here when it is.
Yes, to emphasize, the post is meant to define the situation under consideration as: “something close to a 10x increase in growth; or death”. We’re interested in this scenario only because it’s the modal scenario in the particular world of LW/EA/AI safety.
The logic of the argument does not apply as forcefully to "smaller" changes (which could still be quite large in absolute terms), and would not apply at all if AI did not increase growth (i.e., did not decrease the marginal utility of consumption)!
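The link between growth and marginal utility can be made concrete under standard CRRA (constant relative risk aversion) preferences; this is an illustrative sketch, not a formula from the original comment:

```latex
% CRRA utility with risk-aversion parameter \gamma > 0:
\[
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad u'(c) = c^{-\gamma}.
\]
% If AI-driven growth scales future consumption by a factor $g > 1$, then
\[
u'(g c) = g^{-\gamma}\, u'(c) < u'(c),
\]
% so faster growth strictly lowers the marginal utility of future consumption.
```

Under these assumptions, a 10x increase in consumption reduces marginal utility by a factor of $10^{\gamma}$, which is why the argument bites hard in the high-growth scenario and not at all if growth is unchanged.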