I agree that uncertainty alone doesn’t warrant separate treatment, and risk aversion is key.
(Before I get into the formal stuff, risk aversion to me just means placing a premium on hedging. I say this in advance because conversations about risk aversion vs risk neutrality tend to devolve into out-there comparisons like the St Petersburg paradox, and that’s never struck me as a particularly resonant way to think about it. I am risk averse for the same reason that most people are: it just feels important to hedge your bets.)
By risk aversion I mean a utility function that satisfies u(E[X])>E[u(X)]. Notably, that means that you can’t just take the expected value of lives saved across worlds when evaluating a decision – the distribution of how those lives are saved across worlds matters. I describe that more here.
For example, say my utility function over lives saved x is u(x)=√x. You offer me a choice between a charity that has a 10% chance to save 100 lives, and a charity that saves 5 lives with certainty. The expected utility of the former option to me is E[u(X)]=0.1⋅√100=1, while the expected utility of the latter is E[u(X)]=1⋅√5≈2.24. Thus, I choose the latter, even though it has lower expected lives saved (E[X]=0.1⋅100=10 for the former, E[X]=5 for the latter). What’s going on is that I am valuing certain impact over higher expected lives saved.
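A minimal sketch of that comparison in code (using only the numbers from the example above):

```python
import math

# Utility over lives saved, u(x) = sqrt(x): an illustrative concave (risk-averse) choice.
def u(lives_saved):
    return math.sqrt(lives_saved)

# Each charity is a list of (probability, lives_saved) outcomes.
risky_charity   = [(0.1, 100), (0.9, 0)]   # 10% chance to save 100 lives
certain_charity = [(1.0, 5)]               # saves 5 lives with certainty

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def expected_utility(lottery):
    return sum(p * u(x) for p, x in lottery)

print(expected_value(risky_charity), expected_value(certain_charity))      # 10.0 vs 5.0
print(expected_utility(risky_charity), expected_utility(certain_charity))  # 1.0 vs ~2.24
# Expected lives saved favours the risky charity; expected utility favours the certain one.
```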
Apply this to the meat eater problem, where we have three choices:
1) spend $10 on animal charities
2) spend $10 on development charities
3) spend $5 on each of them
If you’re risk-neutral, 1) or 2) is the way to go – pick animals if your best bet is that animals are worth more (accounting for efficacy, room for funding, etc.), and pick development if your best bet is that humans are worth more. But both options leave open the possibility that you are terribly wrong and you’ve wasted $10 or caused harm. Option 3) guarantees that you’ve created some positive value, regardless of whether animals or humans are worth more. If you’re risk-averse, that certain positive value is worth more than a higher expected value.
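Here is a sketch of that logic with entirely made-up payoffs (the dollar values and the 50/50 credence below are illustrative assumptions, not estimates): suppose there are two equally likely ‘moral worlds’, one in which the animal charity does the good and development backfires slightly via the meat-eater effect, and one in which development does the good and the animal charity achieves little. All three options have the same expected value, but a concave utility function prefers the split.

```python
import math

p_animal_world = 0.5  # assumed credence that animals dominate morally (illustrative)

# Hypothetical value created per $10 in each "moral world" (made-up numbers):
#                 (if animals matter most, if humans matter most)
payoff = {
    "all_animal": (10.0,  0.0),
    "all_dev":    (-2.0, 12.0),   # meat-eater effect makes development net-negative here
}
payoff["split"] = tuple(0.5 * a + 0.5 * d
                        for a, d in zip(payoff["all_animal"], payoff["all_dev"]))

def u(v):
    # A concave utility that is also defined for losses (sqrt isn't), purely for illustration.
    return 1 - math.exp(-v / 5)

for name, (v_animal, v_human) in payoff.items():
    ev = p_animal_world * v_animal + (1 - p_animal_world) * v_human
    eu = p_animal_world * u(v_animal) + (1 - p_animal_world) * u(v_human)
    print(f"{name:10s}  expected value = {ev:.2f}   expected utility = {eu:.3f}")
# All three options have expected value 5.00, but "split" has the highest expected utility,
# because it never lands in a world where the $10 did nothing or caused harm.
```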
It sounds like we agree about what risk aversion is! The term I use for utility functions like your square-root-of-lives-saved example is a ‘concave utility function’. I have one of these, sort of; it goes up quickly for the first x lives (I’m not sure how large x is exactly), then becomes more linear.
But it’s unexpected to me for other EAs to value {amount of good lives saved by one’s own effect} rather than {amount of good lives per se}. I tried to indicate in my comment that I think this might be the crux, given the size of the world.
(In your example of valuing the square root of lives saved (or of lives per se): if there are 1,000 good lives already, then preventing 16 deaths has a utility of √16 = 4 under the former, but only √1000 − √984 ≈ 0.25 under the latter; and preventing 64 deaths is twice as valuable as preventing 16 under the former, but ~4x as valuable under the latter.)
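A quick numerical check of that parenthetical (1,000 pre-existing lives as in the example, with u(x)=√x in both cases; only the argument of the square root differs):

```python
import math

baseline = 1000  # good lives that exist regardless of my action (from the example)

def marginal_utility_own_effect(lives_saved):
    # sqrt applied to the lives saved by my own action
    return math.sqrt(lives_saved)

def marginal_utility_world_total(lives_saved):
    # sqrt applied to the total number of good lives in the world
    return math.sqrt(baseline) - math.sqrt(baseline - lives_saved)

for n in (16, 64):
    print(n, marginal_utility_own_effect(n), round(marginal_utility_world_total(n), 3))
# own effect:  sqrt(16)=4 and sqrt(64)=8   -> preventing 64 deaths is 2x as valuable as 16
# world total: ~0.254 and ~1.029           -> preventing 64 deaths is ~4x as valuable as 16
```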
Your parenthetical clarifies that you just find it weird because you could add a constant inside the concave function and change the relative value of outcomes. I just don’t see any reason to do that? Why does the size of the world net of your decision determine the optimal decision?
The parenthetical isn’t why it’s unexpected; it’s clarifying how the two are actually different.
As an attempt at building intuition for why it matters, consider an agent who applies the ‘square root of lives saved by me’ function afresh to each action, instead of keeping track of how many lives they’ve saved over their existence. This agent would gain more utility from taking four separate actions, each of which certainly saves 1 life (for 1 utility each, 4 in total), than from one lone action that certainly saves 15 lives (for √15 ≈ 3.87 utility). Then generalize this example to the case where they do keep track, but progress just ‘resets’ for new clones of them, or to the real-world case where there are multiple agents with similar values.
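A sketch of that bookkeeping issue, using the same square-root function and the 4×1-life vs 1×15-lives numbers from the example above:

```python
import math

actions = [1, 1, 1, 1]   # four actions, each certainly saving one life
lone_action = [15]       # one action certainly saving fifteen lives

def per_action_utility(lives_per_action):
    # sqrt applied afresh to each action's own lives saved
    return sum(math.sqrt(x) for x in lives_per_action)

def lifetime_utility(lives_per_action):
    # sqrt applied once to the agent's running lifetime total
    return math.sqrt(sum(lives_per_action))

print(per_action_utility(actions), per_action_utility(lone_action))  # 4.0 vs ~3.87
print(lifetime_utility(actions),  lifetime_utility(lone_action))     # 2.0 vs ~3.87
# Per-action accounting ranks four 1-life saves above one 15-life save;
# lifetime accounting reverses that. A "reset" (a clone, or another agent with the
# same values) turns lifetime accounting back into something like per-action accounting.
```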
I describe this starting from 6 paragraphs up in my edited long comment. I’m not sure if you read it pre- or post-edit.
I suppose that is a coherent worldview but I don’t share any of the intuitions that lead you to it.
Could you describe your intuitions? ‘valuing {amount of good lives saved by one’s own effect} rather than {amount of good lives per se}’ is really unintuitive to me.
To me, risk aversion is just a way of hedging your bets about the upsides and downsides of your decision. It doesn’t make sense to me to apply risk aversion to objects that feature no risk (background facts about the world, like its size). It has nothing to do with whether we value the size of the world. It’s just that those background facts are certain, and von Neumann-Morgenstern utility functions like we are using are really designed to deal with uncertainty.
Another way to put it is that concave utility functions just mean something very different when applied to certain situations vs uncertain situations.
In the presence of certainty, saying you have a concave utility function means you genuinely place lower value on additional lives given the presence of many lives. That seems to be the position you are describing. I don’t resonate with that, because I think additional lives have constant value to me (if everything is certain).
But in the presence of uncertainty, saying that you have a concave utility function just means that you don’t like high-variance outcomes. That is the position I am taking. I don’t want to be screwed by tail outcomes. I want to hedge against them. If there were zero uncertainty, I would behave like my utility function was linear, but there is uncertainty, so I don’t.
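One way to see the ‘linear under certainty’ point: with deterministic outcomes, any increasing utility function (concave or not) ranks options simply by lives saved, so √x and x pick the same option; the two only come apart once the outcomes are lotteries. A minimal sketch, with made-up options:

```python
import math

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

certain_options = {"A": [(1.0, 5)], "B": [(1.0, 9)]}              # no uncertainty at all
risky_options   = {"A": [(1.0, 5)], "B": [(0.1, 100), (0.9, 0)]}  # B is now a long shot

for name, options in [("certain", certain_options), ("risky", risky_options)]:
    linear  = max(options, key=lambda k: expected_utility(options[k], lambda x: x))
    concave = max(options, key=lambda k: expected_utility(options[k], math.sqrt))
    print(name, "-> linear picks", linear, "| concave picks", concave)
# Under certainty both utility functions pick B (more lives is more lives).
# Under uncertainty the linear function picks the risky B (EV 10 > 5),
# while the concave function picks the certain A (EU ~2.24 > 1.0).
```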
This is so interesting to me.
I introduced this topic and wrote more about it in this shortform. I wanted to give the topic its own thread and see if others might have responses.
I hedge against tail outcomes too – even despite the world’s size making my choices mostly only affect value on the linear parts of my value function! That’s because tail outcomes are often large. (Maybe I mean something like: Kelly betting / risk aversion is often useful for fulfilling instrumental subgoals too.)
(Edit: and I think ‘correctly accounting for tail outcomes’ is just the correct way to deal with them.)
Yes (to genuinely placing lower value on additional lives when many lives are already present), though it’s not because additional lives are less intrinsically valuable, but because I have other values which are non-quantitative (narrative) and are almost maxed out way before there are very large numbers of lives.
A different way to say it would be that I value multiple things, but many of them don’t scale indefinitely with lives, so the overall function goes up faster at the start of the lives graph.