I think you have an acronym collision here between HLMI = "human-level machine intelligence" = "high-level machine intelligence". Your overall conclusion still seems right to me, but this collision made things confusing.
Details
I got confused because the evidence provided in footnote 11 didn't seem (to me) like it implied "that the researchers simply weren't thinking very hard about the questions". Why would "human-level machine intelligence" imply the ability to automate the labour of all humans?
My confusion was resolved by looking up the definition of HLMI in part 4 of Bio Anchors. There, HLMI refers to "high-level machine intelligence". If you go back to Grace et al. 2017, they defined this as:
"High-level machine intelligence" (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.
This seems stronger to me than human-level! Even "AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement" (the definition of PASTA above) could leave some labour out, but this definition does not.
I think your conclusion is still right. There shouldn't have been a discrepancy between the forecasts for HLMI and "full automation" (defined as "when for any occupation, machines could be built to carry out the task better and more cheaply than human workers"). Similarly, the expected date for the automation of AI research, a job done by human workers, should not be after the expected date for HLMI.
Still, I would change the acronym and maybe remove the section of the footnote about individual milestones; the milestone forecasts came from a separate survey question than the forecasts for automating specific human jobs, and it was confusing to skim through Grace et al. 2017 expecting those data points to have come from the same question.
Thanks for the correction! I've corrected the term in the Cold Takes version. (I'm confining corrections to that version rather than correcting there, here, LessWrong, the PDF, etc. every time; also, editing posts here can cause bugs.)