I agree this is a “problem” for utilitarianism (up-ticked accordingly).
But it’s also a “problem” for any system of ethics that takes expected value into account, which applies to nearly everyone. How many random people on the street would say, “No, ends never justify the means. Like in that movie, when the baddie asked that politician for the nuclear codes and she said she didn’t know them? She shouldn’t have lied.”
We’re all—utilitarians and non-utilitarians alike—just debating where that line is. I reckon utilitarians are generally more likely to accept ends justifying means, but not by much, given everything the OP says and the many similar points people have made on this Forum and in EA literature and messaging.
Unless you’re a “naive utilitarian”, which is why we have that concept, though arguably, in light of recent events, EA still doesn’t talk about it enough. I was very shocked at the thought that SBF could be this naive. (Although since hearing more details of the FTX story, I think it’s more likely that SBF wasn’t acting on utilitarian reasoning when he committed fraud. Someone else puts it better than I could here: “My current guess is more that it’ll turn out Alameda made a Big Mistake in mid-2022. And instead of letting Alameda go under and keeping the valuable FTX exchange as a source of philanthropic money, there was an impulse to pride, to not lose the appearance of invincibility, to not be so embarrassed and humbled, to not swallow the loss, to proceed within the less painful illusion of normality and hope the reckoning could be put off forever. It’s a thing that happens a whole lot in finance! And not with utilitarians either! So we don’t need to attribute causality here to a cold utilitarian calculation that it was better to gamble literally everything, because to keep growing Alameda+FTX was so much more valuable to Earth than keeping FTX at a slower growth rate. It seems to me to appear on the surface of things, if the best-current-guess stories I’m hearing are true, that FTX blew up in a way classically associated with financial orgs being emotional and selfish, not with doing first-order naive utilitarian calculations.”)
I also love this quote from the same post, for how it emphasises just how rarely, even on (non-naive) utilitarian grounds, serious rule-breaking is Good:
I worry that in the Peter Singer brand of altruism—that met with the Lesswrong Sequences which had a lot more Nozick libertarianism in it, and gave birth to effective altruism somewhere in the middle—there is too much Good and not enough Law, that it falls out of second-order rule utilitarianism into first-order subjective utilitarianism, and rule utilitarianism is what you ought to practice unless you are a god.
But it’s also a “problem” for any system of ethics that takes expected value into account, which applies to nearly everyone
Not all systems of ethics take expected value into account. Examples of views that do not take EV into account include virtue ethics, deontological views, and some forms of intuitionism.
Sorry, “any system of ethics” was unclear. I didn’t mean “any of the main normative theories that philosophers discuss”; I meant “any way people have of making moral decisions in the real world”. I think that’s the relevant thing here, and I think there are very few people in the world who are 100% deontologists or what have you (hence “which applies to nearly everyone” and my next sentence).
In the real world, people don’t usually make EV calculations before making decisions, no? That seems very much like an EA thing.

I think my attempt to give a general description here is failing, so let me take that bit out altogether and focus on the example, and see if that makes my point clearer:
I agree this is a “problem” for utilitarianism (up-ticked accordingly).
But it’s also a “problem” for...nearly everyone. How many random people on the street would say, “No, ends never justify the means. Like in that movie, when the baddie asked that politician for the nuclear codes and she said she didn’t know them? She shouldn’t have lied.”
Yes, but ends/means and EV are two distinct things. It is true that EV is a technical apparatus we can use to make our ends/means reasoning more precise. But that does not mean that people who talk about whether the ends justify the means are thereby subscribing to EV reasoning. It is perfectly consistent to accept such ends/means claims while rejecting the EV framework.
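(To make the distinction concrete, here is a toy illustration of what that “technical apparatus” looks like, with entirely made-up numbers. Suppose lying to the baddie has a 0.9 chance of averting a catastrophe we value at 1,000 lives saved and a 0.1 chance of backfiring at a cost of 10 lives, while telling the truth changes nothing. An EV reasoner computes

$$\mathrm{EV}(\text{lie}) = 0.9 \times 1000 + 0.1 \times (-10) = 899 \;>\; 0 = \mathrm{EV}(\text{truth})$$

and concludes the lie is justified. The point here is that one can reach the same verdict about the lie without ever running, or endorsing, a calculation like this.)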
Virtue ethics is a good example of this, since it puts emphasis on moderation as well as intuition. So a virtue ethicist can consistently take a moderate, midway approach (where the ends sometimes justify the means) without accepting any particular theoretical framework (EV or otherwise) that describes these intuitions mathematically, because it is logically possible that morality does not bottom out in mathematics.
Here’s an analogy. Perhaps morality is more like jazz music. You know good jazz when you hear it, and bad jazz music is even more noticeable, but that doesn’t mean there is any axiomatic system that can tell us what it means for jazz music to be good. It is possible that something similar is true for morality. If so, then we need not accept EV theory, even if we believe that ends can justify means.
I know. My point is that people who really think “ends never justify the means” are very rare in the world.
(Incidentally, I think I’ve seen you insist elsewhere that utilitarianism is a metaethical theory and not a normative one, when it is in fact a normative one; e.g. first result.)