I don’t think this is quite what I’m referring to, but I can’t quite tell! But my quick read is we are talking about different things (I think because I used the word utility very casually). I’m not talking about my own utility function with regard to some action, but the potential outcomes of that action on others, and I don’t know if I’m embracing risk aversion views as much as relating to their appeal.
Or maybe I’m misunderstanding, and you’re just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1EV and a 0% chance of causing harm / think I just shouldn’t care about that difference?
In retrospect, my comment was poorly thought out; I think you’re right that it’s not directly addressing your scenarios.
I think there are two separate issues with my comment:
1. My comment was about being risk-averse with respect to utility; your quick take was about wanting to avoid causing harm. Those aren’t necessarily the same thing.
2. You can self-consistently believe in diminishing marginal utility of welfare, i.e., have a utility function that isn’t just “utility = sum(welfare)”. And given the way your quick take used the word “utility”, you really meant something more like “welfare” (it sounds like this is what you’re saying in your reply comment).
RE #1, my sense is that “a person is risk-averse with respect to utility” is isomorphic to “a person disprefers a lottery with a possibility of doing harm, even when it has the same expected utility as a purely positive lottery”. Or at least, I think a person is making the same mistake in both scenarios. But it’s not immediately obvious that these are equivalent, and I’m not 100% sure it’s true. Now I kind of want to see if I can come up with a proof, but I’d need to take some time to dig into the problem.
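As a toy numeric sketch of the kind of case I have in mind (the concave utility function and all the numbers are made up for illustration): a risk-averse utility over welfare ranks a lottery containing a possible harm below a sure thing with the same expected welfare.

```python
import math

def u(w):
    # A concave (risk-averse) utility over welfare changes: sqrt-like,
    # extended to negative welfare by odd symmetry (purely illustrative).
    return math.copysign(math.sqrt(abs(w)), w)

def expected(f, lottery):
    # Expected value of f over a lottery given as (probability, outcome) pairs.
    return sum(p * f(x) for p, x in lottery)

# Lottery A: 80% chance of +1.5 welfare, 20% chance of -1 (a harm) -> EV = +1
lottery_a = [(0.8, 1.5), (0.2, -1.0)]
# Lottery B: certain +1 welfare -> EV = +1
lottery_b = [(1.0, 1.0)]

ev_a = expected(lambda x: x, lottery_a)  # 0.8*1.5 + 0.2*(-1) = 1.0
ev_b = expected(lambda x: x, lottery_b)  # 1.0
eu_a = expected(u, lottery_a)            # 0.8*sqrt(1.5) - 0.2 ≈ 0.78
eu_b = expected(u, lottery_b)            # 1.0
```

Both lotteries have expected welfare +1, but the concave utility strictly prefers the harm-free one, which is the behavior I’m claiming corresponds to risk aversion with respect to utility.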
RE #2, I do in fact believe that utility = welfare, but that’s a whole other discussion and it’s not what I was trying to get at with my original comment, which means I think my comment missed the mark.
Or maybe I’m misunderstanding, and you’re just rejecting the conclusion that there is a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1EV and a 0% chance of causing harm / think I just shouldn’t care about that difference?
Depends on what you mean by “EV”. I do reject that conclusion if by EV you mean welfare. If by EV you mean something like “money”, then yeah: money has diminishing marginal utility, and you shouldn’t just maximize expected money.
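To make the money case concrete (log utility is a standard textbook model of diminishing marginal utility; the specific numbers are made up): a gamble can have higher expected money than a sure thing while having lower expected utility.

```python
import math

def u(wealth):
    # Log utility: a standard model of diminishing marginal utility of money.
    return math.log(wealth)

sure_wealth = 100.0
# Gamble: 50% chance wealth becomes 200, 50% chance it becomes 40.
expected_money = 0.5 * 200.0 + 0.5 * 40.0          # 120 > 100
expected_utility = 0.5 * u(200.0) + 0.5 * u(40.0)  # ≈ 4.49
sure_utility = u(sure_wealth)                      # ln(100) ≈ 4.61
```

The gamble wins on expected money (120 vs. 100) but loses on expected log utility, so maximizing expected money and maximizing expected utility come apart.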