> For AGI to do most human work for <$25/hr by 2043, many things must happen.
I don’t think this is necessarily the right metric, for the same reason that I think the following statement doesn’t hold:
> transformative AGI is a much higher bar than… even the unambiguous attainment of expensive superhuman AGI
Basically, while the contest rules do say, “By ‘AGI’ we mean something like ‘AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less,’” they then go on to clarify, “What we’re actually interested in is the potential existential threat posed by advanced AI systems.” The natural reading of that clarification is that AGI that caused (or severely threatened to cause) human extinction or the permanent disempowerment of humanity would qualify as transformative. My interpretation also seems more consistent with the common definition of TAI as “AI having an impact at least as large as the Industrial Revolution.” Further, I think expensive superhuman AGI would threaten to cause an existential catastrophe in a way that qualifies it under this interpretation.
If we then look at your list under this interpretation, we no longer have to worry about “AGI inference costs drop below $25/hr (per human equivalent)” or “We invent and scale cheap, quality robots”, and possibly not others as well (such as “We massively scale production of chips and power”). If we just drop those two cruxes (and assume your other numbers hold), then we’re up to 4%. If we further drop the one about chips and power, then we’re up to 9%. A quick sketch of the arithmetic is below.
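To make the arithmetic concrete, here is a minimal Python sketch. The per-crux probabilities are my recollection of the numbers in the original post (treat them as illustrative assumptions, not exact figures); dropping a crux just divides the joint product by that crux’s probability.

```python
# Sketch of the crux arithmetic. The probabilities below are assumed /
# recalled from the original post -- illustrative, not authoritative.
cruxes = {
    "invent algorithms for transformative AGI": 0.60,
    "AGIs learn faster than humans": 0.40,
    "inference costs drop below $25/hr": 0.16,
    "invent and scale cheap, quality robots": 0.60,
    "massively scale chips and power": 0.46,
    "avoid derailment by human regulation": 0.70,
    "avoid derailment by AI-caused delay": 0.90,
    "avoid derailment from wars": 0.70,
    "avoid derailment from pandemics": 0.90,
    "avoid derailment from severe depressions": 0.95,
}

def joint_probability(dropped=()):
    """Product of all crux probabilities, ignoring any crux in `dropped`."""
    p = 1.0
    for name, prob in cruxes.items():
        if name not in dropped:
            p *= prob
    return p

drop_two = ("inference costs drop below $25/hr",
            "invent and scale cheap, quality robots")
drop_three = drop_two + ("massively scale chips and power",)

print(f"all ten cruxes:          {joint_probability():.1%}")            # ~0.4%
print(f"drop $25/hr + robots:    {joint_probability(drop_two):.1%}")    # ~4.2%
print(f"also drop chips & power: {joint_probability(drop_three):.1%}")  # ~9.0%
```

The point is just that removing a 16% crux and a 60% crux divides the headline figure by roughly 0.16 × 0.60 ≈ 0.1, which is how <1% becomes ~4%, and additionally removing the 46% chips-and-power crux yields ~9%.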