Thanks for this substantive and useful post. We’ve looked at this topic every few years in unpublished work at FHI to think about whether to prioritize it. So far it hasn’t looked promising enough to pursue very heavily, but I think more careful estimates of the inputs and productivity of research in the field (for forecasting relevant timelines and understanding the scale of the research) would be helpful. I’ll also comment on a few differences between the post and my models of BCI issues:
- It does not seem a safe assumption to me that AGI is more difficult than effective mind-reading and control: the latter requires complex interfaces with biology and faces large barriers to effective experimentation. My guess is that a comprehensive regime of BCI capabilities of this sort would be preceded by AGI, and that your estimate of D is too high.
- The idea that free societies never stabilize their non-totalitarian character, so that stable totalitarian societies predominate over time, overlooks applications of this and other technologies to stabilizing other societal forms (e.g. security forces making binding oaths to principles of human rights and constitutional government, backed by transparently inspected BCI, or the introduction of AI security forces designed with similar motivations), especially if the alternative is predictably bad. Moreover, other technologies such as AGI would arrive before centuries of this BCI dynamic played out.
- Global dominance is blocked by nuclear weapons, but dominance of the long-term future by a state that comprises a large chunk of the world and outgrows the rest (e.g. by being ahead in AI or space colonization once economic and military power is limited by resources) is more plausible, so S is too low.
- I agree that the idea of creating aligned AGI through BCI is quite dubious: it basically requires already having aligned AGI to link with, and so is superfluous (and the link could in any case be provided by the aligned AGI if desired in the long term). However, BCI that actually was highly effective for mind-reading would make international deals on WMD or AGI racing much more enforceable, since national leaders could make verifiable statements that they have no illicit WMD programs or secret AGI efforts, or that joint efforts to produce AGI with specific objectives are not being subverted. This seems potentially an enormous factor.
- Lie detection via neurotechnology, or mind-reading of complex thoughts, seems quite difficult, and faces a structural problem: the representations of complex thoughts develop idiosyncratically in each individual, whereas structures like optic nerve connections and the lower levels of V1 can be tracked by their definite inputs and outputs, which are shared across humans.
- I haven't seen any great intervention points here for the downsides, analogous to alignment work for AI safety or biosecurity countermeasures against biological weapons.
- If one thought BCI technology was net helpful, one could try to advance it, but it's a moderately large and expensive field, so one would likely need leverage (via advocacy or better R&D selection within the field) to accelerate it enough to matter and to be competitive with other areas of x-risk reduction activity.
I think if you wanted to get more attention on this, the most effective thing to do would likely be a more rigorous assessment of the technology and a best-efforts, nuts-and-bolts quantitative forecast (preferably with some care about infohazards before publication). I'd be happy to give advice and feedback if you pursue such a project.