Existential Risk of Misaligned Intelligence Augmentation (Particularly Using High-Bandwidth BCI Implants)
General Note
This post is based on content originally published in the book AGE OF BCI: Existential Risks, Opportunities, Pathways. The full book is available for free to download and share by anyone under a Creative Commons BY-NC-ND license. I encourage you to read, explore, and discuss the broader research and new concepts presented there. If you have any questions, suggestions, or ideas, feel free to contact me.
I. Introduction
Before outlining the issues related to the development of Intelligence Augmentation (IA), it’s necessary first to summarize the main concerns related to further progress in the field of AI:
There is currently no consensus on when AI that significantly surpasses human intelligence in all fields of activity will emerge. However, observing the current progress, driven by a risky technological race, we should consider as possible a scenario in which AI (conscious or not) surpasses humans and, in the longer term, reduces our freedom or even eliminates part or all of our species.
We should also consider possible scenarios in which powerful AI is used by a narrow group of people (e.g., a terrorist organization, a totalitarian government) to achieve particular goals that are misaligned with the common good of all humanity.
Accordingly, we should do everything in our power to reduce the risk of these scenarios occurring by investing our time and resources in mechanisms that may help minimize those threats.
Despite all countermeasures, we may struggle to control every effort to build increasingly advanced AI systems that are potentially dangerous to humanity, because it’s very difficult to monitor the actions of all the groups that may be working on powerful AI.
Moreover, despite the best intentions of the designers of currently developed and implemented security mechanisms, and the huge amount of work invested in them, such systems may prove insufficient in the face of a powerful superintelligence.
Therefore, we’re seeking additional ideas that would help reduce the risks associated with AI development.
This is the point at which the concept of IA and Brain-Computer Interface (BCI) implants emerges. Because evolutionary processes are far too slow compared to the high dynamics of AI development, the only way to keep control over synthetic intelligence may be to connect Homo sapiens’ brains with external systems that enhance our intelligence.
At this point, I’d like to introduce a new group of problems closely related to the IA concept, especially IA based on BCI, and present them first in a general form. In the next sections, I’ll describe the details, implications, and final conclusions.
Despite the arguments in favor of developing IA in order to compete with AI, it’s important to realize that a person equipped with a sufficiently advanced IA system based on high-bandwidth BCI implants may become a high-intelligence entity able to surpass non-augmented humans.
Unlike with AI, the problem of consciousness arising is irrelevant here: a human, by essence a conscious being, will be the one supported by such IA potential.
While in the case of AI we can at least try to ensure that its nature is designed to be as friendly to our species as possible (we’re currently investing significant resources toward this), we have no basis to predict what the intentions, emotional states, and judgments of people equipped with powerful IA capabilities will be, or how they may evolve even over a short period of time.
Given the above, if the entity or entities supported by powerful IA technology have values and goals that aren’t aligned with the generally perceived social good (from the start or later on), they may pose an existential threat to part of humanity or to humanity as a whole.
II. New Arms Race
In times of rising international tensions, there’s a risk that the development of IA based on BCI technology will become the arena of a new technological arms race. As with the AI threat, an entity (e.g., a government elite, a terrorist organization, or some other group of people) that first implements efficient solutions may gain an exponentially increasing advantage in more and more areas over time. This can mean, among other things, an edge in civilian and military technologies. Eventually, it can lead to supremacy over other entities in any field.
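To make the intuition of an “exponentially increasing advantage” concrete, here is a minimal sketch, under the illustrative (and unproven) assumption that an enhanced entity’s capability compounds at a constant rate \(r\) proportional to its current level:

\[ \frac{dK}{dt} = rK \quad\Rightarrow\quad K(t) = K_0 e^{rt}. \]

A rival that starts the same program with a delay \(\Delta t\) then trails by

\[ K_0 e^{rt} - K_0 e^{r(t-\Delta t)} = K_0 e^{rt}\left(1 - e^{-r\Delta t}\right), \]

an absolute gap that grows without bound even though the rival’s relative lag stays constant. Under this toy model, any head start in an IA race becomes harder, not easier, to close over time.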
As a possible scenario, let’s consider the leader (or party elite) of a totalitarian country who tries to use IA technologies to consolidate and expand regional dominance. Such individuals may wish to increase their intelligence far beyond its current level. They may be willing to invest vast resources in building research centers for the development of IA technologies, as well as to copy existing IA solutions through industrial espionage and then improve them. Since some governments already devote significant resources to the development of nuclear weapons, they’ll surely be able to invest in a much more versatile and powerful technology to expand their influence and supremacy. The cost of such an endeavor seems extremely low compared with the unimaginable benefits it could bring.
As with AI development, work on IA may take place in strict secrecy. Any unnecessary interest and objections from the public could slow the work down or bring it to a temporary or even complete halt, for instance as a result of protests, public pressure, sabotage, or even military intervention. Not arousing the suspicions and concerns of foreign governments or the broader public may therefore be in the best interest of the entities pursuing such development. A non-public approach can significantly improve the efficiency of development and provide the most comfortable working conditions possible. Ultimately, it can ensure that IA technology is developed as quickly as possible, providing growing advantages over opponents.
III. Omission of Security Measures
Many of the safeguards we are currently developing to design the safest AI systems possible may be intentionally abandoned in the case of IA development. It may be in the best interest of some owners to use IA technology without the safeguards meant to prevent actions contrary to the values and expectations of the general public. We should treat it as possible that the elite of a totalitarian regime or a terrorist organization will conclude they don’t need to invest in this area, because it’s unimportant from their point of view or even an obstacle to achieving their specific goals. Such shortcuts may occur, among other places, with an important safeguard such as “Explainable Neural Networks”, as well as with all kinds of approaches aimed at implementing “Embedded Values”. If IA technology were widely and evenly available in society, the above-mentioned solutions could be developed as strongly desired. Unfortunately, from the perspective of entities that try to create powerful IA solely for their own purposes, implementing such safeguards may be undesirable. The main argument could run as follows: there is no need to invest in all these safety measures, as in the end it’s my intelligence that will be extended. I don’t have to worry about a superintelligent AI; after all, I’ll be that powerful entity.
Maintaining safety can also be highly problematic for another very important class of safeguards, those focused on isolating an intelligence and limiting its interaction with the external world, the so-called AI Box. It should be noted that in the case of IA, such solutions have no chance of fulfilling their role by default, because an entity (one person or a group) supported by IA potential can freely communicate and interact with the world. Moreover, entities with resources significant enough to develop such ground-breaking technology likely have much greater power (even before the use of IA) to influence the world than the average member of society, or even than larger groups such as small or medium-sized countries. In this case, the entity exploiting the potential of IA is not only not isolated, but already in an extremely privileged starting position to achieve its goals.
It’s hard to expect the implementation of safeguards by a single person or a narrow group who may believe they know best what the world should look like. Also worrying is that abandoning safeguards can considerably accelerate the implementation of advanced IA. Such a strategy can also reduce the cost of the entire project, which can be another argument for taking a shorter but, for the public, much more dangerous path. Taking shortcuts will be tempting not only for entities whose values and goals are questionable, but also for some groups with a utilitarian and democratic approach to building and using IA. In that case, the reason may be enormous pressure to win the race against another entity whose progress in AI/IA development could lead to unpredictable, potentially highly risky acts.
IV. Selective Distribution Within Society
Selective Distribution due to Costs:
It takes time for most new inventions to become widely distributed and available in a specific region, and even more time for them to become broadly accessible around the globe. In the case of advanced, cutting-edge technologies, this period may take a few years in rich countries in the optimal scenario; in less-developed countries, it can be much longer. Suppose we achieve an IA solution based on BCI technology that is advanced enough to significantly increase human intelligence. The following question then needs to be answered: who will have priority access to enhanced intelligence using IA? People with the lowest intelligence and the worst living conditions, to even out their ability to compete with others in society? Or rather the wealthiest, as is the case with almost every new cutting-edge technology? Or maybe the elite of a totalitarian country in which IA is highly advanced?
In the “natural” circumstances of slow adoption of technology within society, other important questions arise: What will be the relationship between those who use IA and the rest of society? How will people without this technology feel when coexisting with those supported by IA capabilities? What impact will this have on their sense of worth and competitiveness in the face of the increasing dominance of people with enhanced intelligence? IA technology can provide significant and growing advantages over time in any field of human activity for those privileged to use its potential. In the coming years, this situation may widen the differences and tensions in society, ultimately bringing new, serious social conflicts both locally and between global entities.
Selective Distribution due to Computing Power Limitations:
Let’s assume for the moment that, contrary to reasonable predictions, we succeed in making IA based on BCI technology available to all willing people (e.g., one billion people) in a relatively short time (e.g., one year). Even then, we’ll face other significant concerns. How will human intelligence based on IA and distributed among numerous users compete with AI, which can be much more consolidated in terms of computing power? It’s important to keep in mind that powerful AI systems can be highly focused on narrowly defined goals that are potentially dangerous to humanity. This problem may apply to AI that has escaped human control and acts independently of human will. It can also apply to AI used by some humans (e.g., the elite of a totalitarian state or a terrorist group) who want to achieve their particularistic goals. The effectiveness of powerful AI focused on narrow goals can be higher. As a result, IA broadly distributed across society may not be sufficiently competitive with consolidated systems, as the sketch below illustrates.
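The consolidation concern can be illustrated with a simple, admittedly speculative scaling sketch. Suppose the effective capability of a single agent grows superlinearly with the compute it controls, say \(P(c) = c^{\alpha}\) with \(\alpha > 1\) (a hypothetical exponent, not an established law), and that the capabilities of independent users simply add. Splitting a total compute budget \(C\) evenly among \(N\) augmented users then yields a combined capability of

\[ N \cdot \left(\frac{C}{N}\right)^{\alpha} = C^{\alpha}\, N^{1-\alpha} < C^{\alpha} \quad \text{for } \alpha > 1, \]

so under these assumptions a consolidated system outperforms the same resources spread across a billion users. If returns to concentration were instead sublinear (\(\alpha < 1\)), the conclusion would reverse in favor of broad distribution.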
In the face of the above-mentioned danger, democratic societies may adopt a different strategy for distributing intelligence. They may enhance a specific group of individuals, e.g., a democratically elected party or the military, to maintain security and counter the growing threats from consolidated AI systems in the hands of hostile entities. In this strategy, privileged individuals can take advantage of powerful intelligence potential not accessible to the rest of the citizens. However, this raises further important questions. Will they use this powerful potential predictably and beneficially for the citizens of their country and all humanity? Will such individuals be able to relinquish their enhanced intelligence if society so decides?
These risks associated with the potential uses of IA, in both totalitarian and democratic countries, can lead to increased social tensions in the coming years and, at a later stage, to local as well as international conflicts. Such tensions will, in turn, further intensify the arms race and deepen the secrecy of AI and IA development.
V. Evolution of Values and Goals
Given the limited resources at our disposal, we may decide that a group of competent and moral persons must be chosen from society to contain the dangers posed by consolidated intelligence. In that case, another question arises: how do we judge the values and goals of the chosen persons? Even if we select the right people, with impeccable reputations and good intentions, can we be sure that their values and goals won’t change in the near future? IA may be applied to someone who is initially very empathetic, utilitarian, and aligned with the values and goals of humanity; yet, with significantly enhanced cognitive abilities, they could change their views and attitudes toward some or all of humanity in a short period of time. We have no way of being certain that a person or group with powerful intelligence capabilities won’t change their goals and attitudes toward other people, even if they themselves didn’t suspect it beforehand. A similar process of rapidly changing values has already been observed in dynamically learning and, consequently, evolving AI systems. It’s hard to ensure that people who seemed the right choice for wielding powerful IA won’t become what we fear in the context of advanced AI algorithms: a powerful superintelligence whose values and goals are misaligned with those of the rest of humanity.
VI. Summary
In the existing literature on existential risks, AI misaligned with human values is almost always listed among the most likely dangers. As presented above, the risks associated with the development of the IA concept, especially IA based on high-bandwidth BCI technology, appear to be at least as high. Among the most serious hazards that need to be considered immediately are:
Growing competition between countries as well as private entities, which can start a new arms race.
The risk of non-public development, in totalitarian and democratic systems alike.
Omission of safeguards, which can occur during the development process both in totalitarian societies (to achieve the goals of the elite) and in democratic ones (safety compromises in the face of threats from other countries).
Selective distribution of IA power within society, depending on privileged social position, or only within a single country.
Limited IA power per person under a widely distributed strategy, in contrast to a consolidated approach (e.g., powerful IA used only by a narrow group of people, or a concentrated AI freed from human control).
Unpredictable, potentially very rapid evolution of the values and goals of entities using sufficiently powerful IA.
In general: the use of advanced IA by a narrow group of people (in totalitarian and democratic countries alike) in a way that is highly at odds with the expectations of the broader public.
Further development of the IA concept can bring results far different from those expected. Ultimately, it can lead to a situation in which a new, powerful entity or group of entities claims the right to arrange our world as they see fit. Confronted with this new existential risk, our countermeasures should be at least as intense and broad in scope as those employed to prevent the emergence of traditionally understood misaligned AI. We should take extensive action as soon as possible to reduce this risk. First, it’s essential to make as many people as possible aware of the problem without delay. Second, it’s necessary to agree on the most important risk factors and implement as effective a strategy as possible to counter the threats from this new direction.