Could you describe your intuitions? ‘Valuing {number of good lives saved by one’s own effect} rather than {number of good lives per se}’ is really unintuitive to me.
To me, risk aversion is just a way of hedging your bets about the upsides and downsides of your decision. It doesn’t make sense to me to apply risk aversion to things that involve no risk (background facts about the world, like its size). That has nothing to do with whether we value the size of the world; it’s just that those background facts are certain, and the von Neumann-Morgenstern utility functions we are using are really designed to deal with uncertainty.
Another way to put it is that concave utility functions just mean something very different when applied to certain situations vs uncertain situations.
In the presence of certainty, saying you have a concave utility function means you genuinely place lower value on additional lives given the presence of many lives. That seems to be the position you are describing. I don’t resonate with that, because I think additional lives have constant value to me (if everything is certain).
But in the presence of uncertainty, saying that you have a concave utility function just means that you don’t like high-variance outcomes. That is the position I am taking. I don’t want to be screwed by tail outcomes. I want to hedge against them. If there were zero uncertainty, I would behave like my utility function was linear, but there is uncertainty, so I don’t.
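A minimal numeric sketch of that distinction, with an invented square-root utility function and an invented 50/50 gamble (neither is meant to represent anyone’s actual values): with a concave utility function, a gamble is worth less than a sure outcome with the same expected number of lives, while a linear utility function is indifferent between them.

```python
import math

def expected_utility(u, lottery):
    """Expected utility of a lottery given as [(outcome, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

# Illustrative utility functions over number of good lives (assumptions for this sketch):
concave_u = lambda lives: math.sqrt(lives)   # diminishing marginal utility
linear_u = lambda lives: lives               # constant marginal utility

sure_thing = [(100, 1.0)]                    # 100 good lives for certain
gamble = [(0, 0.5), (200, 0.5)]              # same expected number of lives, high variance

for name, u in [("concave", concave_u), ("linear", linear_u)]:
    eu_sure = expected_utility(u, sure_thing)
    eu_gamble = expected_utility(u, gamble)
    verdict = "prefers the sure thing" if eu_sure > eu_gamble else "is indifferent"
    print(f"{name}: EU(sure)={eu_sure:.2f}, EU(gamble)={eu_gamble:.2f} -> {verdict}")
```

Under certainty the two functions rank outcomes identically, since any increasing utility function prefers more lives to fewer; the concavity only changes behavior once there is variance to hedge against.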
This is so interesting to me.

I introduced this topic and wrote more about it in this shortform. I wanted to give the topic its own thread and see if others might have responses.
I don’t want to be screwed by tail outcomes. I want to hedge against them.
I do this too, but even despite the world’s size making my choices mostly only affect value on the linear parts of my value function! Because tail outcomes are often large. (Maybe I mean something like: Kelly betting/risk aversion is often useful for fulfilling instrumental subgoals too.)
(Edit: and I think ‘correctly accounting for tail outcomes’ is just the correct way to deal with them).
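A small sketch of the Kelly point, with made-up bet parameters: maximizing long-run growth over a repeated binary bet picks an interior fraction of the bankroll (the Kelly fraction), so hedged, risk-averse-looking behavior can fall out instrumentally even before appealing to a concave terminal utility.

```python
import math

# Illustrative repeated bet: win probability p, net odds b:1 (made-up numbers).
p, b = 0.6, 1.0

def expected_log_growth(f):
    """Expected log growth rate per round when staking fraction f of the bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)

fractions = [i / 1000 for i in range(1000)]          # 0.000 .. 0.999
best = max(fractions, key=expected_log_growth)
kelly = p - (1 - p) / b                              # closed-form Kelly fraction

print(f"numerically best fraction ~ {best:.3f}, Kelly formula gives {kelly:.3f}")
# Staking everything maximizes single-round expected wealth, but its log growth is
# hugely negative: repeated all-in play is almost surely ruined by the losing tail.
```

The hedging here comes from the dynamics of repeated bets rather than from placing less terminal value on the marginal unit won, which seems to be the instrumental-subgoal reading above.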
saying you have a concave utility function means you genuinely place lower value on additional lives given the presence of many lives
Yes, though it’s not because additional lives are less intrinsically valuable, but because I have other values which are non-quantitative (narrative) and almost maxed out well before there are very large numbers of lives.
A different way to say it would be that I value multiple things, but many of them don’t scale indefinitely with lives, so the overall function rises faster at the start of the lives axis.
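One toy way to draw that shape, with invented components and constants: summing a component that keeps scaling linearly with lives and a bounded, narrative-like component that is nearly maxed out early gives an overall value function that is steep at the start of the lives axis and close to linear, with a much smaller slope, afterwards.

```python
import math

def toy_value(lives, linear_weight=1.0, narrative_weight=1e6, saturation_scale=1e3):
    """Illustrative aggregate value: one linearly scaling component plus one
    bounded, narrative-like component that is nearly maxed out for large lives."""
    scaling_part = linear_weight * lives
    saturating_part = narrative_weight * (1 - math.exp(-lives / saturation_scale))
    return scaling_part + saturating_part

for n in [1e2, 1e3, 1e4, 1e6, 1e9]:
    marginal = toy_value(n + 1) - toy_value(n)   # marginal value of one extra life at size n
    print(f"lives = {n:>13,.0f}   value of one more life ~ {marginal:,.2f}")
```

The overall curve is concave even though the lives component itself stays linear, simply because the other component is bounded.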
I suppose that is a coherent worldview but I don’t share any of the intuitions that lead you to it.