Gunman: [points a sniper rifle at a faraway kid] Give me $10 or I’ll kill this kid.
Utilitarian: I’m sorry, why should I believe that you will let the kid live if I give you $10? Also, I can’t give you the money because that would set a bad precedent. If people know I always give money to gunmen, that would encourage them to start taking hostages and demanding money from me.
Gunman: I promise to let her live and to keep it a secret. See, I have this bomb-collar that will explode if I try to remove it. Here’s a detonator that starts working in 1 hour; now you can punish me if I break my promise.
Utilitarian: How do I know you won’t come back tomorrow to threaten another kid?
Gunman: I’m collecting money for a (non-effective) charity. I only do this because threatening utilitarians is an easy source of money. I promise I’ll only threaten you once.
Utilitarian: So you’re saying the mere existence of utilitarians can generate cruel behavior in people who otherwise wouldn’t? Guess I should consider not being a utilitarian, or at least keeping it a secret.
EDIT: Will someone explain why this is worth (strong) downvotes? It seems like a pretty natural extension of game theory: if you reveal that you’re always open to sacrificing personal utility for others, you leave yourself more open to exploitation than with a tit-for-tat-like strategy (e.g. contractualism), meaning people are more likely to try to exploit you (e.g. by threatening nuclear war). If you think I made a mistake in my reasoning, why leave me with less voting power rather than clicking the disagreement vote or leaving a comment explaining it?
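To make that game-theory point concrete, here is a toy sketch (mine, not part of the original argument) with made-up numbers and a deliberately simplified threatener model: a target who is known to always concede keeps attracting threats, while one who refuses quickly stops being worth threatening.

```python
# Minimal illustrative sketch: opportunistic threateners issue threats more
# often against a target known to always pay up than against one who refuses.
# The demand size, threat cost, round count, and the Laplace estimate are all
# arbitrary assumptions; the (large) cost of a refused threat actually being
# carried out is deliberately left out to keep the sketch short.

import random

def simulate(concede_policy, rounds=1000, demand=10, threat_cost=1, seed=0):
    """Return (threats_received, total_paid) for a target with the given policy.

    concede_policy(history) -> True if the target pays up when threatened;
    history is the target's public track record of (was_threatened, conceded) pairs.
    """
    rng = random.Random(seed)
    history, threats, paid = [], 0, 0
    for _ in range(rounds):
        # The threatener estimates how likely this target is to pay from the
        # target's visible history (Laplace-smoothed) and only bothers making
        # a threat when the expected payout beats the cost of threatening.
        concessions = sum(1 for _, conceded in history if conceded)
        p_pay = (concessions + 1) / (len(history) + 2)
        if p_pay * demand > threat_cost and rng.random() < p_pay:
            threats += 1
            conceded = concede_policy(history)
            if conceded:
                paid += demand
            history.append((True, conceded))
        else:
            history.append((False, False))
    return threats, paid

def always_concede(history):
    # "Always open to sacrificing personal utility for others."
    return True

def never_concede(history):
    # Tit-for-tat-like: refuses, so threatening stops looking profitable.
    return False

print("always concede:", simulate(always_concede))  # many threats, lots paid
print("never concede: ", simulate(never_concede))   # a few early threats, then none
```

With these (arbitrary) parameters the known conceder keeps getting threatened round after round, while the refuser stops being targeted almost immediately; the sketch ignores what happens when a refused threat is carried out, which is the hard part of the real dilemma.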
EDIT 2: DC’s hypothesis that it’s because of vibes and not reasoning is interesting, although I also find it plausible that some EAs strongly identify as utilitarian and don’t like seeing it questioned (they don’t seem to have a problem with a pro-utilitarianism argument putting a child in mortal peril, e.g. the drowning child thought experiment). There’s a reason thought experiments in ethics often have these attributes; I’m not trying to disturb, I’m trying to show the structure of threats succinctly without wasting the reader’s time with fluff. So, for example, I chose a child so I don’t need to specify a high number of expected DALYs per dollar; I chose a sniper rifle because then there doesn’t need to be a mechanism to make the child also keep the agreement a secret; I chose a bomb-collar because that’s a quick way to establish a credible punishment mechanism; and so on.
People were probably just squicked by the shocking gunman example opening the first sentence with no context and auto-downvoted based on vibes rather than your reasoning. You optimized pretty hard for violent shock value with your first sentence, which could be a good hook for a short story in other contexts, but here it hijacks the altruistic reader with ambiguously threatening information. I don’t personally mind, but maybe it’s triggering for some. Try using less violent or more realistic hypotheticals, maybe.