“If you disagree with our admittedly imperfect guesses, we kindly ask that you supply your own preferred probabilities (or framework modifications).”
Three questions for you that would help us improve our model:
What important error do you think is made by our model?
What modification would you propose to address the error?
What impact do you think your modification would have on the resultant forecast?
Agreed. AGI can have great influence in the world just by directing humans.
But by the definition of transformative AGI that we use—i.e., that AGI is able to do nearly all human jobs—I don’t think it’s fair to equate “doing a job” with “hiring someone else to do the job.” To me, it would be a little silly to say “all human work has been automated” and only mean “the CEO is an AGI, but yeah, everyone still has to go to work.”
Of course, if you don’t think robotics is necessary for transformative AGI, then you are welcome to remove the factor (or equivalently set it to 100%). In that case, our prediction would still be <1%.