This was great, thank you. I’ve been asking people about their reasons for working on AI safety rather than on other world-improving things, assuming they want to maximize the good they do. Wonderful when people write it up without me having to ask!
One thing that would have made this post/your talk clearer (at least for me) is more detail on how you define ‘AGI’, since all the cruxes depend on it.
Thank you for defining AGI as something that can do the things a regular smart human can do, and then asking the very important question of how expensive that AGI would be. But what are those regular-smart-human things? What fraction of them would an AI need to cover to count as AGI (though that depends a lot on how you define ‘task’)?
I still feel very confused about a lot of things. My impression is that AI is already much better than humans at quite a few narrow tasks, though this depends on the definition. If AI were suddenly much better than humans at half of all the tasks humans can do, but sucked at the rest, then that presumably wouldn’t count as artificial ‘general’ intelligence under your definition(?), but it’s unclear to me whether that would be any less transformative, though again that depends a lot on the cost. Now that I think about it, I don’t think I understand how your definition of AGI differs from the result of whole-brain emulation, apart from the fact that they get there by different routes. I’m also not clear on whether you use the same definition of AGI as other people, whether other people generally use the same one, and how much all the other cruxes depend on exactly how you define it.
(Only attempting to answer this because I want to practice thinking like Buck, feel free to ignore)
Now that I think about it, I don’t think I understand how your definition of AGI differs from the result of whole-brain emulation, apart from the fact that they get there by different routes.
My understanding is that Buck defines AGI to point at a cluster of things such that technical AI safety work (as opposed to, e.g., AI policy work, AI safety movement building, or other things he could be doing) is likely to be directly useful. You can imagine that “whole-brain emulation safety” would look very different as a problem to tackle, since you could rely much more on things like “human values”, introspection, the psychology literature, etc.