Executive summary: This exploratory essay argues that genuine sadistic preferences—where people intrinsically value others’ suffering—exist in humans, outlines possible evolutionary and sociological explanations for them, and suggests that better understanding sadism could inform both human welfare and risks from advanced AI.
Key points:
- The author defines sadism as an intrinsic preference for others to suffer (sometimes context-dependent, e.g. "deserved suffering"), distinguishing it from merely sadistic-looking behaviors explained by other motives.
- Evidence for genuine sadism includes self-reports endorsing enjoyment of causing harm, historical and modern torture practices, revenge-driven cruelty, everyday bullying and griefing, and spiteful acts such as epilepsy trolling and revenge porn.
- Evolutionary hypotheses suggest sadism may have been adaptive for retributive punishment, intimidation of enemies, or dominance within groups, or may have drifted from hunting-related aggression.
- Sociological risk factors may include exposure to violence, reduced empathy (e.g. through trauma or experiences of unfairness), glorification of suffering, and dysfunctional family environments; studies link sadism to violent media, abuse histories, and traits from the Dark Tetrad.
- Open research questions include refining typologies of sadism, understanding its neurological basis, and learning from clinical research on sadistic personality disorder.
- Insights into the origins and mechanisms of sadism could help anticipate and mitigate the risk of similar harmful value formation in transformative AI systems.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.