TL;DR, because I got long-winded: If you ever find yourself planning to commit some morally horrible act in the name of a good outcome, stop. Those kinds of choices aren't made in the real world; they're a thought exercise (and usually a pretty bad one, too).
Long version:
Sorry that you got downvoted so hard; keep in mind that knee-jerk reactions are probably pretty strong right now. While the disagreement is justified, the downvotes probably aren't (I'm assuming this is a legitimate question).
I'm constantly looking to learn more about ethics, philosophy, and related fields, and I was recently introduced to this website: What is Utilitarianism? | Utilitarianism.net, which I really liked. There are a few things I disagree with or feel could have been explored more, but overall I think it's good.
To restate and make sure I understand where you're coming from: I think you're framing the current objections like a trolley problem, or its more advanced version, the transplant case. (This is addressed in 8. Objections to Utilitarianism and Responses – Utilitarianism.net, second paragraph under "General Ways of Responding to Objections to Utilitarianism.") If I were going to reword it, I would put it something like this:
“When considered in large enough situations, the ideal of precommitment would be swamped by the potential utility gains for defecting.”
This is the second response commonly used in defense of the utilitarian framework: "debunk the moral intuition" (paragraph 5 in the same chapter and section).
I believe, and I think most of us believe, that this isn't the appropriate response to this situation, because in this case the moral intuition is correct. Any misbehavior on this scale results in a weaker economic system, harms thousands if not millions of people, and erodes trust in society itself.
A response you might offer would be something like "but what if the stakes were even higher?"
And I agree. It would be pretty ridiculous if, after the Avengers saved NYC from a Chitauri invasion, someone tried to sue the Hulk for using their car to crush an alien or something. We would all agree with you there: the illegal action (crushing a car) is justified by the alternative (aliens killing us all).
The problem with that kind of scale, however, is this: if you ever find yourself in a situation where you think, "I'm the only one who can save everyone; all it takes is [insert thing that no one else wants me to do]," stop what you're doing and do what the people around you tell you to do.
If you think you're Jesus, you're probably not Jesus (or, in this case, the Hulk).
That’s why the discussions of corrupted hardware and the unilateralist’s curse (links provided by OP) are so important.
For more discussion on this, you can look in Elements and Types of Utilitarianism – Utilitarianism.net, under "Multi-level Utilitarianism Versus Single-level Utilitarianism."
One must-read section says: "In contrast, to our knowledge no one has ever defended single-level utilitarianism, including the classical utilitarians. Deliberately calculating the expected consequences of our actions is error-prone and risks falling into decision paralysis."
I would encourage you to read that whole section (and the one that follows it, if you think much of rule utilitarianism), as I think one of the most common problems with most people's understanding of utilitarianism is the single-level vs. multi-level distinction.