To my knowledge, the most common rightness criterion of utilitarianism states that an action (or rule, or virtue) is good if, in expectation, it produces net positive value. Generally, fraud of any kind does not have a net positive expected value, and it is very hard to identify the exceptions[1], if indeed any exist. Hence it is prudent to have a general rule against committing fraud, and I believe this aligns with what Richard is arguing in his post.
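As a rough formalisation of this criterion (a sketch in my own notation, not anything from the thread): write O for the set of possible outcomes, P(o | a) for the probability of outcome o given action a, and V(o) for the net aggregate value of o. Then:

```latex
% Expectational rightness criterion (sketch; notation is mine, not the thread's).
\mathrm{EV}(a) \;=\; \sum_{o \in O} P(o \mid a)\, V(o),
\qquad a \text{ is good} \;\iff\; \mathrm{EV}(a) > 0 .
```

On this reading, the claim about fraud is that EV(fraud) is almost always negative once all of its consequences are counted.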
Personally, I find it very dubious that fraud could ever be sanctioned by this criterion, especially once the harm to defrauded customers and the reputational damage are factored in[2]. But let’s imagine, for the sake of discussion, that exceptions do exist and that they can be confidently identified[3]. This could be seen as a flaw of this kind of utilitarianism, e.g. if one has a very strong intuition against illegal actions like fraud[4]. One could then appeal to other heuristics, such as risk-aversion (which is potentially more compatible with theories like objective utilitarianism) or moral uncertainty, which is my preferred response. That is, there is a non-trivial possibility that theories like traditional deontology are true, and this should also be factored into our decisions (e.g. by way of a moral parliament).
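To make the moral-uncertainty move concrete, here is a minimal sketch in Python (every name, credence, and score below is invented for illustration). Strictly, a simple credence-weighted score is closer to maximising expected choiceworthiness than to the full bargaining model of a moral parliament, but it conveys the basic idea of letting rival theories weigh in:

```python
# A minimal moral-uncertainty sketch (illustrative numbers only).
# Each moral theory is weighted by our credence in it, and each theory
# scores the available actions on a common [-1, 1] scale.

credences = {
    "expectational_utilitarianism": 0.6,
    "deontology": 0.4,
}

scores = {
    "commit_fraud": {
        "expectational_utilitarianism": 0.1,   # tiny positive EV, by stipulation
        "deontology": -1.0,                    # violates a strict constraint
    },
    "stay_honest": {
        "expectational_utilitarianism": 0.05,
        "deontology": 1.0,
    },
}

def parliament_choice(credences, scores):
    """Pick the action with the highest credence-weighted score."""
    def weighted(action):
        return sum(credences[t] * scores[action][t] for t in credences)
    return max(scores, key=weighted)

print(parliament_choice(credences, scores))  # -> stay_honest
```

Even with a (stipulated) positive EV for fraud, a modest credence in deontology is enough to swing the overall verdict against it.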
To summarise, I think in any realistic scenario, no reasonable type of utilitarianism will endorse fraud. But even if it somehow does, there are other adequate ways to handle this counter-intuitive conclusion which do not require abandoning utilitarianism altogether.
Edit: I just realised that maybe what you’re saying is more along the lines of “it doesn’t matter if the exceptions can be confidently identified or not, what matters is that they exist at all”. An obvious objection to this is that expected value is generally seen as relative to the agent in question, so it doesn’t really make sense to think of an action as having an ‘objective’ net positive EV.[5] Also, it’s not very relevant to the real world, since ultimately it’s humans who are making the decisions based on imperfect information (at least at the moment).
This is especially so given the prevalence of bias and motivated reasoning in human judgement.
And FWIW, I think this is a large part of the reason why a lot of people have such a strong intuition against fraud. It might not even be necessary to devise other explanations.
Just to be clear, I don’t think the ongoing scenario was an exception of this kind.
Although it is easy to question this intuition, e.g. by imagining a situation where defrauding one person is necessary to save a million lives.
If an objective EV could be identified on the basis of perfect information and some small fundamental uncertainty, it would be much more like the actual value of the action than an EV, and it would lead to absurd conclusions. For instance, any minor everyday action could, through butterfly effects, lead to an extremely evil or extremely good person being born, and would thus have a very large ‘objective EV’, if defined this way.
In response to your edit: Yes, that’s what I mean. Utilitarianism can’t say that fraud is wrong as a matter of principle. The point about EV is not strictly relevant, since expected value theory != utilitarianism. One is a decision theory and the other is a metaethical framework. And my point does not concern what kinds of actions are rational. My point concerns what states of affairs are morally good on utilitarian grounds. The two notions can come apart (e.g. it might not be rational to play the lottery, but a state of affairs where one wins the lottery and donates the money to effective causes would be morally good on utilitarian grounds).
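A quick worked version of the lottery point, with invented numbers (a $1 ticket and a 1-in-10^7 chance of a $10^6 prize):

```latex
% Invented numbers: a \$1 ticket with probability 10^{-7} of winning \$10^{6}.
\mathrm{EV}(\text{buy ticket}) \;=\; 10^{-7} \cdot \$10^{6} \;-\; \$1 \;=\; -\$0.90 \;<\; 0
```

So buying the ticket is irrational in EV terms, yet the particular state of affairs in which one wins and donates the prize effectively would still count as good by utilitarian lights.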
I’m guessing you mean ‘normative ethical framework’, not ‘meta-ethical framework’. That aside, what I was trying to say in my comment is that EV theory is not only a criterion for a rational decision, though it can be one,[1] but is often also considered a criterion for what is morally good on utilitarian grounds. See, for instance, this IEP page.
I think your comment addresses something more like objective (or ‘plain’ or ‘actual’) utilitarianism, where all that matters is whether the outcome of an action was in fact net positive ex post, within some particular timeframe, as opposed to whether the EV of the outcome was reasonably deemed net positive ex ante. The former is somewhat of a minority view, to my knowledge, and is subject to serious criticisms. (Not least that it is impossible to know with certainty what the actual consequences of a given action will be.[2])[3]
That being said, I agree that the consequences ex post are still very relevant. Personally, I find a ‘dual’ or ‘hybrid’ view like the one described here most plausible, which attempts to reconcile the two dichotomous views. Such a view does not entail that it is morally acceptable to commit an action which is, in reasonable expectation, net negative; it simply accepts that positive consequences could in fact result from such an action, despite our expectation, and that these consequences themselves would be good, and we would be glad about them. That does not mean that we should do the action in the first place, or be glad that it occurred.[4]
Actually, I don’t think that’s quite right either. The rationality criterion for decisions is expected utility theory, which is not necessarily the same as expected value in the context of consequentialism. The former concerns the utility (or ‘value’) with respect to the individual, whereas the latter concerns the value aggregated over all morally relevant individuals affected in a given scenario. (A notational sketch of this distinction follows after these footnotes.)
Also, in a scenario where someone reduced existential risk but extinction did in fact occur, objective utilitarianism would state that their actions were morally neutral / irrelevant. This is one of many possible examples that seem highly counterintuitive to me.
Also, if you were an objective consequentialist, it seems you would want to be more risk-averse and less inclined to use raw EV as your decision procedure anyway.
I am not intending to raise the question of ‘fitting attitudes’ with this language, but merely to describe my point about rightness in a more salient way.
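As flagged in the first footnote above, here is one way to write down the distinction between decision-theoretic expected utility and consequentialist expected value. This is a sketch in my own notation, not a standard formulation:

```latex
% Expected utility for a single agent i (decision theory):
\mathrm{EU}_i(a) \;=\; \sum_{o} P(o \mid a)\, u_i(o)
% Expected value aggregated over all morally relevant individuals j (consequentialism):
\mathrm{EV}(a) \;=\; \sum_{o} P(o \mid a) \sum_{j} u_j(o)
```

The two coincide only in the special case where the agent’s own utility function already tracks the aggregate.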
No. I meant ‘metaethical framework.’ It is a standard term in moral philosophy. See: https://plato.stanford.edu/entries/metaethics/
No. Here is what I mean. Utilitarianism defines moral value in terms of utility. So a state of affairs with high net utility is morally valuable, according to utilitarianism. And states of affairs where SBF got away with it (and even some states of affairs where he didn’t) have net positive utility. So they are morally valuable, according to utilitarianism.
Again, we do not need to bring decision theory into this. I am talking about metaethics here. So I am talking about what makes certain things morally good and certain things morally bad. In the case of utilitarianism, this is defined purely in terms of utility. And expected utility != value.
Compare: we can define wealth as having a high net worth, and we can say that some actions are better at generating a high net worth. But we need not include these actions in our definition of the term ‘wealth’. Because being rich != getting rich. The same is true for utilitarianism. What is moral value is nonidentical to any decision procedure.
This is not a controversial point, or a matter of opinion. It is simply a matter of fact that, according to utilitarianism, a state of affairs with high utility is morally good.
I’m aware of the term. I said that because utilitarianism is not a metaethical framework, which is why I’m not really sure what you are referring to. A metaethical framework would be something like moral naturalism or error theory.
Metaethics is about questions like what would make a moral statement true, or whether such statements can even be true. It is not about whether a ‘thing’ is morally good or bad: that is normative ethics. And again, I am talking about normative ethics, not decision theory. As I’ve tried to say, expected value is often used as a criterion of rightness, not only as a decision procedure. That’s why the term ‘expectational’ or ‘expectable’ utilitarianism exists, as described in various sources including the IEP. I have to say, though, that at this point I am a little tired of restating this so many times without receiving a substantive response to it.
Yes, the rightness criterion is not necessarily identical to the decision procedure. But many utilitarians believe that actions should be morally judged on the basis of their reasonable EV, and it may turn out that this is in fact identical to the decision procedure (used or recommended). That does not mean it can’t be a rightness criterion. And let me reiterate: I am talking about whether an action is good or bad, which is different to whether a world-state is good or bad. Utilitarianism can judge multiple types of things.
Also, as I’ve said before, if you in fact wanted to completely discard EV as a rightness criterion, then you would probably want to adjust your decision procedure as well, e.g. to be more risk-averse. The two tend to go hand in hand. I think a lot of the substance of the dilemma you’re presenting comes from rejecting a rightness criterion while maintaining the associated decision procedure, which doesn’t necessarily work well with other rightness criteria.
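As a toy illustration of how a risk-averse criterion can come apart from raw EV (the gamble and the utility function below are invented; a sketch, not a recommendation):

```python
# Illustrative only: raw EV vs a risk-averse (concave-utility) criterion.
import math

# A gamble: 10% chance of +1000, 90% chance of -50.
outcomes = [(0.10, 1000.0), (0.90, -50.0)]

# Raw expected value: 0.1 * 1000 + 0.9 * (-50) = 55.0, so raw EV endorses the gamble.
raw_ev = sum(p * v for p, v in outcomes)

def risk_averse_score(outcomes, alpha=0.01):
    """Expected value under a concave (CARA-style) utility transform."""
    return sum(p * (1 - math.exp(-alpha * v)) / alpha for p, v in outcomes)

print(raw_ev)                       # 55.0
print(risk_averse_score(outcomes))  # about -48.4: the risk-averse criterion rejects it
```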
I agree with that. What I dispute is whether that entails that the action which produced that state of affairs was also morally good. This seems to me very non-obvious. Let me give you an extreme example to stress the point:
Imagine a sadist pushes someone onto the road in front of traffic, just for fun (expecting that they’ll be hit). Fortunately, the car that was going to hit them stops just in time. The driver of that car happens to be a terrorist who was (counterfactually) going to detonate a bomb in a crowded space later that day, but who changes their mind because of the shocking experience (unbeknownst to the sadist). As a result, the terrorist is later arrested by the police before they can cause any harm. This is a major counterfactual improvement in the resulting state of affairs. However, it would seem absurd to me to say that it was therefore good, ex ante, to push the person into oncoming traffic.
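To put toy numbers on the ex ante / ex post gap in this example (all figures invented):

```latex
% Ex ante, pushing has (say) a 0.9 chance of a terrible outcome and no foreseeable upside:
\mathrm{EV}(\text{push}) \;\approx\; 0.9 \cdot (-100) \;+\; 0.1 \cdot 0 \;=\; -90 \;<\; 0
% Ex post, the realised outcome (bombing averted, no one hurt) is very valuable:
V(\text{actual outcome}) \;\gg\; 0
```

A hybrid view can then say that the realised outcome was good while the act, judged ex ante, was still wrong.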
We are talking past one another.
Hmm perhaps. I did try to address your points quite directly in my last comment though (e.g. by arguing that EV can be both a decision procedure and a rightness criterion). Could you please explain how I’m talking past you?