What’s the basis for using expected utility/value calculations when allocating EA funding toward “one-off” bets? More details on what I don’t understand are below for context.
My understanding is that expected value relies on the law of large numbers, so in situations involving bets that are unlikely to be repeated (for example extinction, where you could put a ton of resources in and go from a 5% extinction risk over the next century to a 4% risk) it doesn’t seem like reasoning from expected value should hold. The justification I’ve seen is expected utility and the von Neumann-Morgenstern (VNM) theorem, which I believe says that if your preferences satisfy certain rationality axioms, then there exists a utility function such that you act as if you are maximizing its expected value, and that this is supposed to show it’s optimal to maximize expected utility in that situation.
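To make the kind of calculation I mean concrete, here’s a toy sketch with entirely made-up numbers (the 5%/4% risk figures and the value V of surviving the century are just placeholders): the expected-value gain looks fine on paper, but any single “run” of the bet pays either everything or nothing, so there’s no averaging out.

```python
import random

# Toy numbers, entirely made up: suppose the spending moves extinction risk
# this century from 5% down to 4%, and surviving the century is worth V units.
V = 1_000_000
p_before, p_after = 0.05, 0.04

ev_gain = (p_before - p_after) * V   # 10,000 units "in expectation"
print("expected-value gain:", ev_gain)

# The bet resolves exactly once. Couple the with/without-spending worlds on the
# same random draw u: the spending only matters when u lands in [0.04, 0.05).
random.seed(0)
gains = []
for _ in range(100_000):
    u = random.random()
    gains.append(V if p_after <= u < p_before else 0)

print("average over many hypothetical re-runs:", sum(gains) / len(gains))
print("but each single run paid either 0 or", V, "- never the average")
```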
However, it seems like that doesn’t really tell us much, because you could presumably construct many different utility functions that all satisfy the VNM axioms, and maximizing some of them bankrupts you while maximizing others doesn’t. It seems reasonable to me that a good utility function should discount bets that will rarely be repeated at that scale, since they won’t be run enough times to average out positively in the long run. But as far as I’m aware, EA expected utility/value calculations often don’t account for that.
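Here’s a toy simulation of what I mean by some bankrupting you and some not (setup entirely hypothetical: a repeated even-money bet that wins 60% of the time). Both agents below have VNM-consistent preferences; the one with linear utility in wealth stakes everything each round and almost surely goes broke, while the one with log utility stakes the Kelly fraction and never does.

```python
import random

# Hypothetical repeated bet: even-money, wins with probability p = 0.6.
# Both agents satisfy the VNM axioms; they just have different utility functions.
# Linear utility in wealth -> maximize expected wealth -> stake everything.
# Log utility in wealth -> stake the Kelly fraction 2p - 1 = 0.2 of wealth.
p = 0.6
rounds = 50
trials = 10_000
random.seed(1)

def fraction_gone_broke(stake_fraction):
    broke = 0
    for _ in range(trials):
        wealth = 100.0
        for _ in range(rounds):
            stake = stake_fraction * wealth
            wealth += stake if random.random() < p else -stake
            if wealth <= 0:
                broke += 1
                break
    return broke / trials

print("P(broke), linear utility (stake 100%):", fraction_gone_broke(1.0))  # ~1.0
print("P(broke), log utility (stake 20%):", fraction_gone_broke(0.2))      # 0.0
```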
It seems like people refer to attempts to account for this as risk aversion, and my understanding is that EAs often argue we should be risk-neutral. But the arguments I’ve seen typically frame risk aversion as putting an upper bound on how much we value people’s well-being, and conclude that we don’t want to do that. It seems to me, though, that you could value well-being linearly and still downweight bets that won’t be repeated enough to average out in your favor.
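As a crude sketch of what I have in mind (all numbers made up), you can value lives linearly in both options and still prefer the sure thing once you downweight the one-off long shot:

```python
# All numbers hypothetical. Lives are valued linearly in every line below; the
# only thing that changes is how much weight the one-off long shot gets.
certain_lives = 1_000                       # sure-thing intervention
gamble_p, gamble_lives = 0.001, 2_000_000   # one-off long shot

ev_certain = certain_lives
ev_gamble = gamble_p * gamble_lives         # 2,000 > 1,000: plain EV prefers the gamble

# One crude way to say "this bet won't be repeated enough to average out":
# scale its expected value by some made-up factor < 1.
one_off_discount = 0.3
adjusted_gamble = one_off_discount * ev_gamble   # 600 < 1,000: now the sure thing wins

print(ev_certain, ev_gamble, adjusted_gamble)
```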
Apologies for the lengthy context; I’m sure I’m confused on a lot of points, so any clarity or explanation of what I’m missing would be appreciated!
It’s been a while since I read it, but Joe Carlsmith’s series on expected utility might help some.
Thanks, I’ll check that out!