Intelligence augmentation is generally regarded as relatively safe (and thus good to come before AI) but relatively difficult (and thus unlikely to come before AI). See Nick Bostrom’s “Paths to superintelligence” in Superintelligence (2014).
The clearest strategy that comes to mind is to make artificial neurons that can communicate with biological ones, and then integrate them into whole human neural networks.
This doesn’t make much sense to me; I’m not aware of relevant work or reasons to believe this is promising. (Disclaimer: I’m not familiar with intelligence augmentation.)
I didn’t know that Bostrom discussed other paths to superintelligence in Superintelligence; I need to read it ASAP.
This doesn’t make much sense to me; I’m not aware of relevant work or reasons to believe this is promising.
Yeah, you are probably right. I guess what I was trying to say is that the thing that pops into my mind when I think about possible paths to making us superintelligent is a hybrid between BCI and brain emulations.
And I was imagining that maybe neuron emulations might not be that difficult, or that signals from AI “neurons” (something similar to present-day neural networks) could be enough to be recognized as neurons by the brain.
Maybe that doesn’t sound promising, but without having much knowledge of AI alignment, outer alignment already sounds to me like aligning human neural networks with an optimizer, and then for inner alignment you have to align the optimizer with an artificial neural network. This sounds simpler to me: aligning one type of NN with another.
But maybe it is wrong to think about the problem like that, and the actual problem is easier.
I think how important human cognitive enhancement is may depend on how quickly people think AI is coming and how transformative that AI will be. If we need aligned AI very quickly because we may all be wiped out, then that would take precedence. But if we have time, accelerating advances in human cognitive enhancement may be an extremely worthwhile endeavor. Morally and cognitively enhanced humans may be extremely motivated to do research in areas that EAs are interested in and to create technology to mitigate disasters.