I haven't had a chance to read this post yet, but just wanted to mention one paper I know of that does discuss brain-computer interfaces in the context of global catastrophic risks, which therefore might be interesting to you or other readers. (The paper doesn't use the term existential risk, but I think its basic points could be extrapolated to such risks.)
Beyond the risks associated with medical device exploitation, it is possible that in the future computer systems will be integrated with human physiology and, therefore, pose novel vulnerabilities. Brain-computer interfaces (BCIs), traditionally used in medicine for motor-neurological disorders, are AI systems that allow for direct communication between the brain and an external computer. BCIs allow for a bidirectional flow of information, meaning the brain can receive signals from an external source and vice versa.
The neurotechnology company Neuralink has recently claimed that a monkey was able to control a computer using one of their implants. This concept may seem farfetched, but in 2004 a paralyzed man with an implanted BCI was able to play computer games and check email using only his mind. Other studies have shown that a "brain-to-brain" interface between mammals is possible. In 2013, a researcher at the University of Washington was able to send a brain signal captured by electroencephalography over the internet to control the hand movements of another researcher by way of transcranial magnetic stimulation. Advances are occurring at a rapid pace, and many of the technical bottlenecks that have prevented BCIs from widespread implementation are beginning to be overcome.
Research and development of BCIs have accelerated quickly in the past decade. Future directions seek to achieve a symbiosis of AI and the human brain for cognitive enhancement and rapid transfer of information between individuals or computer systems. Rather than having to spend time looking up a subject, performing a calculation, or even speaking to another individual, the transfer of information could be nearly instantaneous. Numerous studies have already researched the use of BCIs for cognitive enhancement in domains such as learning and memory, perception, attention, and risk aversion (one study was able to incite riskier behavior). Additionally, studies have explored the military applications of BCIs, and the field receives the bulk of its funding from US Department of Defense sources such as the Defense Advanced Research Projects Agency.
While the commercial implementation of BCIs may not occur until well into the future, it is still valuable to consider the risks that could arise, in order to highlight the need for security-by-design thinking and to avoid path dependency, which could result in vulnerabilities (like those seen with current medical devices) persisting in future implementations. Cyber vulnerabilities in current BCIs have already been identified, including some that could cause physical harm to the user or influence behavior. In a future where BCIs are commonplace alongside advanced understandings of neuroscience, it may be possible for a bad actor to achieve limited influence over the behavior of a population or to harm users. This highlights the need for robust risk assessment prior to widespread technological adoption, allowing regulation, governance, and security measures to take identified concerns into account.
The paper is Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology. The passage quoted above is the most relevant section, but table 4 is also relevant, and the paper is open-access and (in my view) insightful, so I'd recommend reading the whole thing.