I think this is wrong, and this intuition that many people have derives from a psychological mistake. Essentially everything in life has diminishing marginal utility, so it almost always makes sense to be risk averse. So it’s intuitive that you should be risk averse with respect to expected utility. But that doesn’t make any logical sense—by definition, you don’t have diminishing marginal utility of utility. Your utility function already accounts for risk aversion. Being risk averse with respect to utility is double-counting.
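To make the double-counting point concrete, here is a toy numeric sketch (my own illustration, not from the thread), using a square-root utility of money as an arbitrary concave example:

```python
import math

# Concave utility of money: diminishing marginal utility encodes risk aversion.
def u(money):
    return math.sqrt(money)

# Lottery A: 50/50 between $0 and $100. Lottery B: $50 for sure.
ev_money_A = 0.5 * 0 + 0.5 * 100      # expected money: 50.0
ev_money_B = 50.0
eu_A = 0.5 * u(0) + 0.5 * u(100)      # expected utility: 5.0
eu_B = u(50)                          # expected utility: ~7.07

# Equal expected money, but B wins on expected utility: the curvature of u
# already captures the risk aversion. Being risk averse about u itself
# would apply that same curvature a second time.
```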
This is a valid statement but non-responsive to the actual post. The argument is that there is intuitive appeal in having a utility function with a discontinuity at zero (ie a jump in disutility from causing harm), and ~standard EV maximisation does not accommodate that intuition. That is a totally separate normative claim from arguing that we should encode diminishing marginal utility.
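A minimal sketch of the kind of discontinuity being described (the jump size and payoffs are numbers I made up for illustration):

```python
# Hypothetical utility function with a jump in disutility at zero harm.
def utility(outcome_value, harm_caused):
    jump = 3.0 if harm_caused > 0 else 0.0  # fixed penalty for causing any harm
    return outcome_value - jump

# Two actions with identical +1 EV over outcomes:
# harmless: +1 with certainty; risky: +1, but a 20% chance of causing harm.
eu_harmless = utility(1.0, harm_caused=0)                 # 1.0
eu_risky = 0.8 * utility(1.0, 0) + 0.2 * utility(1.0, 1)  # 0.4

# Plain EV maximization over outcomes treats the two actions as identical;
# the discontinuous jump term is what separates them.
```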
I don’t think this is quite what I’m referring to, but I can’t quite tell! My quick read is that we’re talking about different things (I think because I used the word “utility” very casually). I’m not talking about my own utility function with regard to some action, but about the potential outcomes of that action for others, and I’m not sure I’m embracing risk-averse views so much as relating to their appeal.
Or maybe I’m misunderstanding, and you’re just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1 EV and a 0% chance of causing harm, i.e. you think I just shouldn’t care about that difference?
In retrospect my comment was poorly thought out; I think you’re right that it’s not directly addressing your scenarios.
I think there are two separate issues with my comment:
1. My comment was about being risk-averse with respect to utility; your quick take was about wanting to avoid causing harm; those aren’t necessarily the same thing.
2. You can self-consistently believe in diminishing marginal utility of welfare, i.e., your utility function isn’t just “utility = sum(welfare)”. And given the way your quick take used the word “utility”, you really meant something more like “welfare” (it sounds like this is what you’re saying in your reply comment).
RE #1, my sense is that “person is risk-averse with respect to utility” is isomorphic to “person disprefers a lottery with a possibility of doing harm, even if it has the same expected utility as a purely-positive lottery”. Or like, I think the person is making the same mistake in these two scenarios. But it’s not immediately obvious that these are isomorphic and I’m not 100% sure it’s true. Now I kind of want to see if I can come up with a proof but I would need to take some time to dig into the problem.
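Here is the lottery comparison in question as a small sketch (utilities are illustrative numbers I chose so both lotteries come out to expected utility 1.0):

```python
# Lotteries as (probability, utility) pairs.
lottery_pure = [(1.0, 1.0)]                 # purely positive: +1 util for sure
lottery_harm = [(0.8, 1.75), (0.2, -2.0)]   # 20% chance of net harm

def expected_utility(lottery):
    return sum(p * util for p, util in lottery)

# Both lotteries have expected utility 1.0, so a standard EU maximizer is
# indifferent. Dispreferring lottery_harm anyway is the attitude at issue:
# treating utility itself as if it had diminishing marginal value.
```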
RE #2, I do in fact believe that utility = welfare, but that’s a whole other discussion and it’s not what I was trying to get at with my original comment, which means I think my comment missed the mark.
Or maybe I’m misunderstanding, and you’re just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1 EV and a 0% chance of causing harm, i.e. you think I just shouldn’t care about that difference?
Depends on what you mean by “EV”. I do reject that conclusion if by EV you mean welfare. If by EV you mean something like “money”, then yeah, I think money has diminishing marginal utility and you shouldn’t just maximize expected money.
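A quick numeric version of why “maximize expected money” fails under diminishing marginal utility (log utility is a standard, if arbitrary, choice; the dollar amounts are made up):

```python
import math

# Gamble G: 50/50 between $10 and $1000. Sure thing S: $400.
ev_money_G = 0.5 * 10 + 0.5 * 1000                # 505: more expected money
ev_money_S = 400.0
eu_G = 0.5 * math.log(10) + 0.5 * math.log(1000)  # ~4.61
eu_S = math.log(400)                              # ~5.99

# G maximizes expected money but S maximizes expected utility, so the
# two rules genuinely come apart once marginal utility diminishes.
```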