Thanks for the question; I should have been more clear. By "groups" I mean small groups of people without specialized knowledge. In Pinker's model, a cell of five malicious people working together isn't much more dangerous than a single malicious person. Historically, people willing to sacrifice themselves to disrupt society haven't been very common or competent, so threats on the level of "what a few untrained people can do" haven't accounted for much damage, compared to threats from nations and from civilization itself.
This changes if a malicious person/small group has specialized expertise (e.g. someone building a virus in their basement), but the lower the base rate of individual malice, the lower the chance that someone who gains this expertise will want to use it to hurt people, and the lower the chance that such people will find each other and form a group.
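To make the compounding explicit, here's a toy sketch of the argument above (my own illustration, not from Pinker's chapter; the probabilities are made up and independence between malice and expertise is assumed):

```python
# Toy model of the base-rate argument above. All numbers are
# hypothetical, and independence is assumed between being
# malicious and acquiring dangerous expertise.

p_malice = 1e-3     # assumed base rate of individual malice
p_expertise = 1e-4  # assumed rate of relevant specialized expertise

# Chance that a given person is both skilled and malicious.
p_dangerous_individual = p_malice * p_expertise  # 1e-07

# The chance that all k members of a would-be cell are malicious
# falls off geometrically in k, so a lower base rate shrinks the
# group threat much faster than the individual threat.
k = 5
p_dangerous_cell = p_malice ** k  # 1e-15

print(f"dangerous skilled individual: {p_dangerous_individual:.0e}")
print(f"all-{k}-malicious cell: {p_dangerous_cell:.0e}")
```

Under these toy numbers, halving the malice base rate halves the individual threat but cuts the five-person cell's probability by a factor of 32.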
Examples of a few "categories" of entity that might be dangerous:
Unskilled individuals (e.g. the Las Vegas shooting)
Unskilled small groups (e.g. extremist militias)
Skilled individuals (e.g. terrorist with a biology lab)
Skilled small groups (e.g. Anonymous/Wikileaks?) (I'd think that nearly all such groups would exist within governments or corporations, but maybe not)
Corporations with dangerous incentives (e.g. capabilities-focused AI companies)
Governments (e.g. the Manhattan Project, the North Korean military)
Societal incentives (e.g. carbon emissions, other varieties of Moloch)
If Pinker is right that very few people want to cause as much harm as possible, we'd worry less about malicious people, whether alone or together, and worry more about threats caused by people who don't want to cause harm but have bad incentives, whether because of profit-seeking, patriotism, or other norms that aren't utilitarian. At least, that's my interpretation of the chapter.
I've been reading Phil Torres's book on existential risks, and I agree with him to the extent that people have been too dismissive about the number of omnicidal agents or their capability to destroy the world. I think his reaction to Pinker would be that the level of competence needed to create disruption is decreasing because of technological development; therefore, historical precedent is not a great guide. See: Who would destroy the world? Omnicidal agents and related phenomena
Abstract:
The capacity for small groups and even single individuals to wreak unprecedented havoc on civilization is growing as a result of dual-use emerging technologies. This means that scholars should be increasingly concerned about individuals who express omnicidal, mass genocidal, anti-civilizational, or apocalyptic beliefs/desires. The present article offers a comprehensive and systematic survey of actual individuals who have harbored a death wish for humanity or destruction wish for civilization. This paper thus provides a strong foundation for future research on "agential risks" and related issues. It could also serve as a helpful resource for counterterrorism experts and global risk scholars who wish to better understand our evolving threat environment.
I don't know that I agree with Pinker; even if he's right about the low base rate, ideas that reassure us about the limited impact of people with guns and poison may not extend to omnicidal attacks. I'm still much more worried about skilled groups of people working within corporations and governments, but I assume that our threat profile will shift more toward individuals over time.
This also seems reminiscent of Bostrom's Vulnerable World Hypothesis (published a year after this thread, so fair enough that it didn't make an appearance here :D). The abstract:
Scientific and technological progress might change people's capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the "semi-anarchic default condition". Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
The most relevant part is Bostrom's "easy nukes" thought experiment.
Dear all, thanks for starting this thread; this is one of the most worrying problems I have been pondering for the past few years.
1. Empirically speaking, Pinker is probably right that individuals are unlikely to try to cause as much harm to the world as possible, and that the logical conclusion is to focus more effort on countering malicious groups. However, I believe that a single unskilled individual with access to the highest concentration of destructive capacity known to the world has even more potential to produce an event of x-risk intensity than a group or a nation of individuals does.
2. My own belief is that the world is static in condition, and that violence will continue on a steadily declining trend unless intervened with, as pleasure is always harder to generate than pain, and people can eventually have an incentive to cause pain to others to generate pleasure ("utility") for themselves.
My thoughts on the dilemma:
I think it's always good to have a better estimate of the likelihood of the x-risk posed by individuals, but I would like to think that we should always have developed enough capacity to deal with the highest-potential x-risk events. I.e., if triggering the nuclear switches would cause an x-risk event, will we have developed enough capacity (advanced technology or preventive measures) to stop that occurrence?
Thank you all very much; it's been a highly pleasurable and very thoughtful read.
Wei Lun