I’m not sure I agree with the conclusion, because people with dark triad personalities may be better than average at virtue signalling and demonstrating adherence to norms.
I think there should probably be a focus on principles, standards and rules that can be easily recalled by a person in a chaotic situation (e.g. put on your mask before helping others). And that these should be designed with limiting downside risk and risk of ruin in mind.
My intuition is that the rule “disfavour people who show signs of being low integrity” is a bad one, as:
- it relies on the ability to compare a person to an idealised person rather than behaviour to a rule, and the former is much more difficult to reason about
- it moves the problem elsewhere rather than solving it
- it’s likely to reduce the diversity and upside potential of the community
- it doesn’t mitigate the risk when a bad actor passes the filter
I’d favour starting from the premise that everyone has the potential to act without integrity, and trying to design systems that mitigate this risk.
What systems could be designed to mitigate the risk of people taking advantage of others? What about spreading knowledge of how we are influenced? With this knowledge, we could recognise manipulative behaviours, switch off our autopilot, and defend ourselves against bad actors. Or would that knowledge becoming widespread lead to some people using it to do more damage?