Thanks for the remarks concerning Hitler and Stalin.
I think it might be quite valuable, for the project as a whole, to better understand why people are drawn to leaders with features they would not tolerate in peers, such as dark traits.
For one, it’s very plausible that, as you mentioned, the explanation is (a) dark traits are genuinely useful: these individuals are (almost) the only ones with enough incentive to take risks, get things done, innovate, etc. In particular, if we do need things like strategies of mutually assured destruction, then we need someone credibly capable of “playing hawk”, and it’s arguably hard to believe nice people would do that. This hypothesis really lowers my credence that screening for dark traits would decrease x-risks; malevolent people would be analogous to nukes, and it’s hard to unilaterally get rid of them.
A competing explanation is that (b) they’re not that useful; they’re parasitic. Dark traits are uncorrelated with achievement; they just make someone better at outcompeting useful pro-social people, e.g., by occupying their corresponding niches, or by getting more publicity, and so making people think (due to representativeness bias) that bad guys are more useful than they are. That’s plausible, too; for instance, almost no one outside the EA and LW communities knows about Arkhipov and Petrov. If that’s the case, then a group could indeed unilaterally benefit from getting rid of malevolent / dark-trait leaders.
(Maybe I should make clear that I don’t have anything against individuals with dark triad traits per se, and I’m as afraid of the possibility of witch hunts and other abuses as everyone else. And even if dark traits were uncorrelated with capacity for achievement, a group might deprive itself of a scarce resource by selecting against very useful dark-trait individuals, like scientists and entrepreneurs.)