I think you have an acronym collision here between HLMI = "human-level machine intelligence" = "high-level machine intelligence". Your overall conclusion still seems right to me, but this collision made things confusing.
Details
I got confused because the evidence provided in footnote 11 didn't seem (to me) like it implied "that the researchers simply weren't thinking very hard about the questions". Why would "human-level machine intelligence" imply the ability to automate the labour of all humans?
My confusion was resolved by looking up the definition of HLMI in part 4 of Bio Anchors. There, HLMI refers to "high-level machine intelligence". If you go back to Grace et al. 2017, they defined this as:
"High-level machine intelligence" (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.
This seems stronger to me than human-level! Even "AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement" (the definition of PASTA above) could leave some labour out, but this definition does not.
I think your conclusion is still right. There shouldn't have been a discrepancy between the forecasts for HLMI and "full automation" (defined as "when for any occupation, machines could be built to carry out the task better and more cheaply than human workers"). Similarly, the expected date for the automation of AI research, a job done by human workers, should not be later than the expected date for HLMI.
Still, I would change the acronym and maybe remove the part of the footnote about individual milestones; the milestone forecasts came from a separate survey question from the forecasts about automating specific human jobs, and it was confusing to skim through Grace et al. 2017 expecting those data points to have come from the same question.
Thanks for the correction! I've corrected the term in the Cold Takes version. (I'm confining corrections to that version rather than making them there, here, on LessWrong, in the PDF, etc. every time; also, editing posts here can cause bugs.)