Thanks for the question; I should have been more clear. By “groups” I mean small groups of people without specialized knowledge. In Pinker’s model, a cell of five malicious people working together isn’t much more dangerous than a single malicious person. Historically, people willing to sacrifice themselves to disrupt society haven’t been very common or competent, so threats on the level of “what a few untrained people can do” haven’t accounted for much damage, compared to threats from nations and from civilization itself.
This changes if a malicious person/small group has specialized experience (e.g. someone building a virus in their basement), but the lower the base rate of individual malice, the lower the chance that someone who gains this expertise will want to use it to hurt people, and the lower the chance that such people will find each other and form a group.
Examples of a few “categories” of entity that might be dangerous:
Unskilled individuals (e.g. the Las Vegas shooting)
Unskilled small groups (e.g. extremist militias)
Skilled individuals (e.g. terrorist with a biology lab)
Skilled small groups (e.g. Anonymous/Wikileaks?) (I’d think that nearly all such groups would exist within governments or corporations, but maybe not)
Corporations with dangerous incentives (e.g. capabilities-focused AI companies)
Governments (e.g. the Manhattan Project, the North Korean military)
Societal incentives (e.g. carbon emissions, other varieties of Moloch)
If Pinker is right that very few people want to cause as much harm as possible, we’d worry less about malicious people, whether alone or together, and worry more about threats caused by people who don’t want to cause harm but have bad incentives, whether because of profit-seeking, patriotism, or other norms that aren’t utilitarian. At least, that’s my interpretation of the chapter.
I’ve been reading Phil Torres’s book on existential risks, and I agree with him to the extent that people have been too dismissive about the number of omnicidal agents and their capability to destroy the world. I think his reaction to Pinker would be that the level of competence needed to create disruption is decreasing because of technological development; therefore, historical precedent is not a great guide. See: Who would destroy the world? Omnicidal agents and related phenomena
The capacity for small groups and even single individuals to wreak unprecedented havoc on civilization is growing as a result of dual-use emerging technologies. This means that scholars should be increasingly concerned about individuals who express omnicidal, mass genocidal, anti-civilizational, or apocalyptic beliefs/desires. The present article offers a comprehensive and systematic survey of actual individuals who have harbored a death wish for humanity or destruction wish for civilization. This paper thus provides a strong foundation for future research on “agential risks” and related issues. It could also serve as a helpful resource for counterterrorism experts and global risk scholars who wish to better understand our evolving threat environment.
I don’t know that I agree with Pinker; even if he’s right about the low base rate, ideas that reassure us about the limited impact of people with guns and poison may not extend to omnicidal attacks. I’m still much more worried about skilled groups of people working within corporations and governments, but I assume that our threat profile will shift more toward individuals over time.
Dear all, thanks for starting this thread; this is one of the most worrying problems I have been pondering for the past few years.
1. Empirically speaking, Pinker is probably right that individuals are less likely to cause as much harm as possible to the world, and the logical conclusion would be to focus more effort on countering malicious groups. However, I believe that even a single unskilled individual with access to the highest concentration of destructive capacity known to the world could have more potential to produce an x-risk-level event than a group or a nation of individuals could.
2. My own belief is that the world is not static in condition, and that violence will not continue on a steady declining trend unless we intervene, since pleasure is always harder to generate than pain, and people can eventually have an incentive to cause pain to others in order to generate pleasure (“utility”) for themselves.
My thoughts on the dilemma:
I think it’s always good to have a better estimate of the likelihood of the x-risk posed by individuals, but I would argue that we should always develop enough capacity to deal with the highest-potential x-risk events. That is: if triggering the nuclear switches would cause an x-risk event, will we have developed enough capacity (advanced technology or preventive measures) to stop that occurrence when it comes?
Thank you all very much; it’s been a highly pleasurable and very thoughtful read.