Good point – thank you for drawing out that premise.
I find myself getting confused as I think about the year-to-year operationalization of a slow takeoff (the distinction between slow and fast takeoff starts to blur).
It seems like the thing we really care about is AI systems falling out of alignment with our intentions as they grow more capable, and it’s not clear where “falling out of alignment” starts in the GDP-doubling framework.
I’ll think about this more & update here once/if it crystallizes.
February 2021 update: I thought about it some more; I now feel confident that I’ll be able to deploy the gains well enough to make up for the opportunity cost.