Here are some questions of mine. I haven’t done a ton to follow discussions of AI safety, which means my questions will either be embarrassingly naive or will offer a critical outside perspective. Please don’t use any that fit the former case :)
It seems like there’s a decent chance that whole brain emulations will come before de novo AI. Is there any “friendly WBE” work it makes sense to do to prepare for this case, analogous to “friendly AI” work?
Around the time AGI comes into existence, it matters a great deal how cheap and fast the hardware available to run it on is. If hardware is relatively expensive and slow, we can anticipate a slower (and presumably more graceful) transition. Is there anything we can do to nudge the hardware industry away from developing ever-faster chips, so that hardware is relatively expensive and slow at the time of the transition? For example, Musk could try to hire away researchers at semiconductor firms to work on batteries or rocket ships, but this could only be a temporary solution: the wages for such researchers would rise in response to the shortage, likely leading more students to go into semiconductor research. (Hiring away the professors who teach semiconductor research might be a better idea, assuming American companies are bad at training employees.)
In this essay, I wrote: “At some point our AGI will be just as smart as the world’s AI researchers, but we can hardly expect to start seeing super-fast AI progress at that point, because the world’s AI researchers haven’t produced super-fast AI progress.” I still haven’t seen a persuasive refutation of this position (though I haven’t looked very hard). So: given that human AI researchers haven’t produced a FOOM, is there any reason to expect that an AI at the level of the human AI research community would produce one? (EDIT: A better framing might be whether we can expect chunky, important AI insights to be discovered in the future, or whether AGI will come out of many small cumulative developments. I suppose the idea that the brain implements a single cortical algorithm should push us toward believing there is at least one chunky undiscovered insight?)