The Dilemma of Ultimate Technology
1. The Dilemma of Ultimate Technology
Modeled on the Prisoner’s Dilemma from game theory, I will set up a thought experiment for the dilemma of ultimate technology.
1) There are two individuals, A and B.
2) A and B each have two options, and neither knows which option the other has chosen:
Option 1: To proceed with the development of a certain technology.
Option 2: To stop the development of a certain technology.
3) The following rules are known to both:
Rule a: If both choose Option 2 (to stop technological development), nothing happens.
Rule b: If both choose Option 1 (to proceed with technological development), their technological levels balance out and continue to elevate.
Rule c: If one chooses Option 1 and the other chooses Option 2, the one who chose Option 1 will have a higher technological level, upsetting the balance of power, and may eventually dominate the one who chose Option 2.
Rule d: As the technology level rises, there is a chance that at some point the technology will go out of control, potentially killing both parties; it might also remain controllable. The probability is unknown, but a comprehensive reading of expert opinion puts it at roughly 50%.
Moreover, let’s add the following settings:
Based on past experiences, A and B are known to have a contentious relationship with each other, and they are both aware of this fact.
On the other hand, past experience has taught A and B that neither prefers to risk their life recklessly. Each knows this about themselves and believes it of the other, because they coexist under mutually assured destruction: if either tries to take the other’s life, both lives may be lost.
Let’s also add the following settings:
A and B each have their respective allies, and A and B are their representatives.
If a technological outburst occurs, not only the representatives but also the allies will die.
If they are dominated by the opponent, not only the representatives but also the allies will be dominated.
How their allies will react is unknown to A and B. If the allies are dissatisfied with A’s or B’s choice, A or B risks losing their status as representative.
The allies know these rules. They don’t want to be dominated by the opponent, but they want to avoid the risk of death even more.
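The game defined by Rules a–d can be sketched in code. The numeric payoffs below are hypothetical, invented only to encode the ranking death < being dominated < status quo < dominating; the 50% runaway probability is the essay’s stated estimate.

```python
# Illustrative payoff model for the two-party game defined by Rules a-d.
# The numeric payoffs are hypothetical; they only encode the ranking
# death < being dominated < status quo < dominating assumed here.
P_RUNAWAY = 0.5  # the essay's estimated chance the technology goes out of control

DEATH, DOMINATED, STATUS_QUO, DOMINATES = -100, -10, 0, 10

def expected_outcome(a_develops: bool, b_develops: bool) -> tuple[float, float]:
    """Expected payoff (A, B) under Rules a-d."""
    if not a_develops and not b_develops:
        return (STATUS_QUO, STATUS_QUO)            # Rule a: nothing happens
    if a_develops and b_develops:                  # Rules b + d: balanced but risky
        each = P_RUNAWAY * DEATH + (1 - P_RUNAWAY) * STATUS_QUO
        return (each, each)
    # Rules c + d: the developer may dominate, but a runaway kills both sides
    developer = P_RUNAWAY * DEATH + (1 - P_RUNAWAY) * DOMINATES
    stopper = P_RUNAWAY * DEATH + (1 - P_RUNAWAY) * DOMINATED
    return (developer, stopper) if a_develops else (stopper, developer)
```

With these illustrative numbers, mutual restraint is the only outcome carrying no risk of death; every branch in which anyone develops exposes both parties to the runaway probability.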
2. The Resolution to Speculate on Others
An important consideration when thinking through this complex dilemma is whether one can focus on a single fact: if the primary goal is to avoid the risk of one’s own death, the other party’s choice is irrelevant.
With this focus, one realizes that, ultimately, one must be prepared to choose Option 2.
The opponent’s choice and the allies’ intentions do affect the outcome, but they have no bearing on one’s own decision. To prevent the death of oneself and one’s allies, there is no option but Option 2. One might be dominated by the opponent, or stripped of one’s representative status by one’s allies. The question is whether one can weigh those losses against the death of oneself and one’s allies.
In the end, a party that cannot maintain this focus risks losing everything, because the dilemma is not limited to this one technology: similar dilemmas may arise again with future technologies. Even if a single technology lets one survive a 50% gamble, once many such technologies emerge, continuing to choose Option 1 will eventually end in ruin.
Therefore, if you are going to gamble at all, you have no choice but to bet on Option 2 from the start. Having bet on it, you can only hope that the other party also settles on Option 2, and that neither party is stripped of leadership by its allies. Every other path leads to a future in which everyone eventually dies.
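The compounding-risk argument can be made concrete: if each successive technology carries an independent 50% chance of going out of control (the essay’s estimate, treated here as an assumption), the probability of surviving n of them while always choosing Option 1 is 0.5^n.

```python
# Chance of surviving n successive technologies when always choosing
# Option 1, assuming each carries an independent probability
# p_catastrophe of going out of control (0.5 per the essay's estimate).
def survival_probability(p_catastrophe: float, n: int) -> float:
    return (1.0 - p_catastrophe) ** n

print(survival_probability(0.5, 1))   # 0.5
print(survival_probability(0.5, 10))  # ~0.001: under one chance in a thousand
```

Ten such gambles in a row already leave less than a one-in-a-thousand chance of survival, which is why the essay concludes the only viable bet is Option 2 from the start.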
What this problem depicts is a situation in which the best outcome cannot be secured by one’s own decision; it always depends on the choices of others. And if one cannot accept this reality and insists on determining one’s future by one’s own decisions alone, that future will surely be jeopardized.
In this situation, the question is whether one can undertake what might be called “speculation on others”: entrusting one’s fate to others, including the adversary with whom one is in conflict and the allies whose thoughts are unknown, rather than determining it oneself. Normally, speculation on others is a matter of trust in those others; in the dilemma of ultimate technology, however, an understanding of the situation itself leads one to speculate on others rationally.
And, everyone involved in this situation, including the other party, one’s allies, and the other party’s allies, is forced into “speculation on others.”
This “speculation on others” is a matter of trusting that others desire self-preservation and can make rational decisions, regardless of whether those involved feel compassion for others or wish to help their comrades.
Therefore, the dilemma of ultimate technology requires all parties involved to have the desire for self-preservation and the ability to make rational and long-term judgments.
This suggests that even if one hesitates to disclose the existence of an ultimate technology for fear of causing temporary panic, there is no choice but to disclose it and encourage rational decision-making.
3. Extension of the Dilemma of Ultimate Technology
If we further expand the model of the dilemma of ultimate technology, the situation becomes clearer.
The initial situation involved two parties, A and B. Now increase the number of leader-and-ally groups: C, D, E, and so on. Beyond the direct parties to mutually assured destruction, this includes groups that are indirectly protected and groups that are not involved at all.
In this case, it may not be possible to expect all groups to make the same judgment when choosing to speculate on others.
Therefore, additional options become available: alliance and exclusion.
One sorts the groups into those likely to have a desire for self-preservation and make rational decisions, and those that may not. Whether there is conflict does not matter at this stage: even with an adversary, one joins hands with any entity likely to value self-preservation and decide rationally, and forms an alliance. The alliance then imposes restrictions on the remaining entities through pressure, surveillance, and denial of the resources needed to develop and operate the technology.
In order to avoid these restrictions, each entity has no choice but to join the alliance. Entities that refuse to join the alliance, or entities that cannot join, will be subject to strong restrictions.
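The sorting-and-restriction step above can be sketched abstractly. Everything here is invented for illustration: the entity names, the scores standing in for the judged likelihood of rational self-preservation, and the threshold.

```python
# Hypothetical sketch of the alliance-and-exclusion step. Scores stand in
# for the judged likelihood that an entity values self-preservation and
# decides rationally; prior conflict between entities plays no role here.
entities = {"A": 0.9, "B": 0.8, "C": 0.7, "D": 0.3}  # illustrative scores
THRESHOLD = 0.5                                       # illustrative cutoff

alliance = {name for name, score in entities.items() if score >= THRESHOLD}
restricted = set(entities) - alliance  # face pressure, surveillance, resource denial

print(sorted(alliance))    # ['A', 'B', 'C']
print(sorted(restricted))  # ['D']
```

The point of the sketch is only that membership turns on a single criterion, rational self-preservation, rather than on ideology or existing alignments.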
4. What the Dilemma of Ultimate Technology Requires
Normally, it is hard to imagine hostile groups joining hands.
In fact, in the case of nuclear weapons, which come close to being an ultimate technology, the situation led instead to the East-West Cold War. However, nuclear weapons were comparatively easy to keep from detonating, so that situation differed from the one an ultimate technology would create.
Excluding certain groups also raises ethical questions from the standpoint of humanity and basic human rights.
However, when the situation of annihilation by ultimate technology is envisaged, these “normal” concepts can be overturned.
As we have seen here, in the dilemma of ultimate technology, only groups that think rationally about their own survival can exist in the international community. This is exclusion, but on the other hand, it is simply requiring that they be groups with a very natural human desire. It does not require adherence to some common good or international agreement. Even if the culture, internal politics, and ideology are different, they can have the commonality of “thinking rationally about their own survival.”
And to begin with, groups that cannot “think rationally about their own survival” are not groups that can long survive in a harsh international environment, with or without this dilemma. It follows that such groups should normally be rare; where they do exist, they are likely accidental and temporary.
Therefore, the dilemma of ultimate technology brings two things to the international community. The first is a value system that places the highest priority on “thinking rationally about one’s own survival.” This, in turn, translates to a value system that does not interfere with, intimidate, or attack the survival desires of other groups. This is the second thing brought to the international community. Until now, this value system had been a matter of international ethics and lip service, but the dilemma of ultimate technology creates a situation where one’s own survival is at risk if one cannot uphold this value system.
On that basis, the international community carries out two tasks through the alliance formed around this new value system. One is the monitoring, persuasion, and technology-related regulation of groups outside the alliance. The other is the establishment of an audit system for technology development within the alliance: having committed to Option 2, the member groups demonstrate their good faith and build trust by adhering to the rules and undergoing regular audits.
5. The Scene Depicted by the Dilemma of Ultimate Technology
This dilemma appears to be a new threat, a complex problem that makes one want to pull one’s hair out, and a situation that further complicates the international community.
However, as one organizes the dilemma of ultimate technology, one comes across three somewhat strange scenes.
The first is that the dilemma culminates not in a call to strengthen ethics such as cooperation, trust in others, collective benefit, and self-sacrifice, but in an encouragement of the ultimately selfish goal of one’s own survival.
The second is that, instead of a strategy of confusing the opponent while concealing one’s own key information, it requires asking the opponent, with the information in the open, to think seriously, deeply, and from a long-term perspective.
And the third is that, as long as the other party understands the two points above, even if the other aspects such as ideology, purpose, values, and culture are different, it is possible to join hands up to the point where this ultimate dilemma can be managed.
This is demanded of all entities, whether they are antagonistic entities, entities within the opposing group, or ordinary citizens within one’s own group.
Here, a scene different from the world of hegemony and power balance unfolds.
The dilemma of ultimate technology, a new, complicated, and hugely impactful problem involving many stakeholders, oddly culminates in a simple and beautiful idea: respect diversity and individual cultures, and think firmly with one’s own will.
Am I dreaming? Is it possible that the ideal society that I subconsciously envision is being projected onto this discussion? I thought I was just digging deep into the theory of the dilemma of ultimate technology based on sincere realism.
6. In Conclusion
I am somewhat skeptical of my own logic because the conclusion seems too ideal.
There may be flaws in the model’s assumptions, or important actors may be missing. I may have leapt over some steps of logic and slipped into idealism. This discussion therefore needs to be reviewed more thoroughly.
And if, even after reviewing and refining the discussion, it still leads to the same conclusion, I might be witnessing a glimmer of hope.
Suppose the conclusion had been that we cannot deal with this problem without immediately ending our antagonisms, bending our patience and ideological convictions for the sake of all humankind, and partly gambling with our own survival. That would be a counsel of despair: if we could understand such things and readily join hands, we would have solved many problems long ago.
On the other hand, if the conclusion we have organized in this document is correct, we simply need to ask everyone to think rationally and choose with their own survival as the first priority. Of course, there will be people who suffer and hesitate in their choices due to biases. However, we can expect many people and many groups to make the right choices. This is a far more hopeful path than the situation hypothetically mentioned above.
And hope gives birth to power. The power to keep moving forward on the right path, no matter how difficult.