(Only attempting to answer this because I want to practice thinking like Buck, feel free to ignore)
Now that I think about it, I don’t think I understand how your definition of AGI differs from the results of whole-brain emulation, apart from the fact that different paths were taken to get there.
My understanding is that Buck defines AGI to point at a cluster of things such that technical AI safety work (as opposed to, e.g., AI policy work, AI safety movement building, or other things he could be doing) is likely to be directly useful. You can imagine that “whole-brain emulation safety” would look very different as a problem to tackle, since you could rely much more on things like human values, introspection, the psychology literature, etc.