I downvoted and want to explain my reasoning briefly: the conclusions presented are too strong, and the justifications don't necessarily support them.
We simply don't have enough experience or data points to say what the "central problem" in a utilitarian community will be. The one study cited seems suggestive at best. People on the spectrum are, well, on a spectrum, and so is their behavior; how they react will not be as monolithic as suggested.
All that being said, I softly agree with the conclusion (because I think this would be true for any community).
All of this suggests that, as you recommend, in communities with lots of consequentialists there needs to be a very strong emphasis on virtues and common-sense norms.
I'm not sure I agree with the conclusion, because people with dark triad personalities may be better than average at virtue signalling and demonstrating adherence to norms.
I think there should probably be a focus on principles, standards, and rules that can be easily recalled by a person in a chaotic situation (e.g. put on your own oxygen mask before helping others), and that these should be designed with limiting downside risk and risk of ruin in mind.
My intuition is that the rule "disfavour people who show signs of being low integrity" is a bad one, as:
- it relies on the ability to compare a person to an idealised person rather than behaviour to a rule, and the former is much more difficult to reason about
- it's moving the problem elsewhere, not solving it
- it's likely to reduce the diversity and upside potential of the community
- it doesn't mitigate the risk when a bad actor passes the filter
I'd favour starting from the premise that everyone has the potential to act without integrity, and trying to design systems that mitigate this risk.
What systems could be designed to mitigate the risk of people taking advantage of others? What about spreading knowledge of how we are influenced? With this knowledge, we can recognize manipulative behaviors and turn off our autopilots, so we can defend ourselves from bad actors. Or would that knowledge becoming widespread lead some people to use it to do more damage?