Thanks Mal. I really liked your EAG talk and I’m very pleased this post can share the ideas more widely. I agree with ~everything here.
The “ecologically inert” perspective makes a good deal of sense to me, but I can also find it ~frustrating that a worldview with such a vast and ambitious moral canvas (wide moral circle, serious consideration of cluelessness and backfire risks) tends to recommend such “tiny shifts at the margin”. So I really appreciated your paragraph about finding a decision theory that permits the possibility of radically transformative changes.
I’d be curious to have your take (and anyone else’s) on the following.
Say you have a friend who is buying and reselling items. She offers you the following deal A:
- She gives you $20.
- Whatever the net balance of her activity is at the end of the month, you share the benefits or losses with her, which can mean anything from -$1000 to +$1000 for you, and you have no expectation at all. You’re clueless. (Importantly, this does not mean EV = 0, but rather EV = ???, albeit within an order of magnitude of $1k.)
She also offers you deal B, where she gives you $1, and that’s it.
You want to maximize your money in a risk-neutral way, and value money linearly, here. Also, assume we have a theory of bracketing that overcomes these two problems in a way that makes bracketing recommend deal A.
Still, it is not clear whether you should follow bracketing, happily take the $20, and ignore the rest. Maybe you should prefer the robustly good deal B, even though this means accepting that you forgo transformative changes… I feel conflicted here.
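To make the cluelessness point concrete, here’s a quick sketch (in Python; the set of candidate prior means is purely illustrative, not a real model of the resale business) of how bracketing and a worst-case rule come apart on these deals:

```python
# Sketch: deal B is robust, deal A's expected value is prior-dependent.
# "Clueless" means we can't privilege any single prior over the shared
# balance, not that its expected value is zero.

def deal_a_ev(prior_mean: float) -> float:
    """EV of deal A under one candidate prior's mean for your share
    of the end-of-month balance (bounded in [-1000, 1000])."""
    return 20.0 + prior_mean

DEAL_B_EV = 1.0  # deal B just pays $1

# Illustrative set of candidate priors spanning the possible range.
candidate_means = [-1000, -500, -100, 0, 100, 500, 1000]

for mu in candidate_means:
    print(f"prior mean {mu:+5d}: EV(A) = {deal_a_ev(mu):+7.1f} vs EV(B) = {DEAL_B_EV:+.1f}")

# Bracketing sets the shared-balance term aside and compares $20 vs $1;
# a worst-case (maxmin) reasoner compares min EV(A) = -$980 vs $1 and takes B.
print("worst-case EV(A):", min(deal_a_ev(mu) for mu in candidate_means))
```

That gap between bracketing’s comparison and the worst case is exactly where I feel the conflict.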
Thoughts? What are your intuitions in this case? And do you think our real-world situation with animals is disanalogous in a crucial way?
Thanks Jim, very interesting. I also feel conflicted, but lean towards taking A.[1]
Here’s how I feel about that:
- Bracketing feels strange when it asks us to be led by consequences which are small in the grand scheme (e.g., +/- $1; Emily’s shoulder), and to set aside consequences which are fairly proximate and which clearly dominate the stakes (e.g., +/- <=$1000; killing the terrorist/kid). It doesn’t feel so strange when our decision procedure calls on us to set aside consequences which dominate the stakes but don’t feel so proximate (e.g., longtermist concerns).
- When I look at very specific cases, I can find it hard to tell when I’m dealing with standard expected value under uncertainty, and when I’ve run into Knightian uncertainty, cluelessness, etc. I’m bracketing out +/- <=$1000 when I say I take A, but I do feel drawn to treating this as a normal distribution around $0 (a pull sketched just below).
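Here’s a rough sketch of that pull (the sigmas are arbitrary numbers I picked for illustration; the point is that the answer doesn’t depend on them):

```python
# Rough Monte Carlo: under any symmetric prior centred on $0, the shared
# balance washes out and deal A is worth ~$20 in expectation, whatever
# sigma you pick. The whole question is whether committing to such a
# prior is legitimate, or whether this is genuine Knightian uncertainty.
import random

random.seed(0)

def mean_payoff_a(sigma: float, n: int = 100_000) -> float:
    total = 0.0
    for _ in range(n):
        # Your share of the balance, capped at +/- $1000 as in the deal.
        balance = max(-1000.0, min(1000.0, random.gauss(0, sigma)))
        total += 20.0 + balance
    return total / n

for sigma in (100, 400, 800):
    print(f"sigma={sigma}: mean payoff of A ~ ${mean_payoff_a(sigma):.2f}")
```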
Ways in which it’s disanalogous to animals that might be important:
- Animal welfare isn’t a one-shot problem. I think the best things we can do for animals involve calculated bets that integrate concern for their welfare into our decision-making more consistently, and teach us about improving their welfare more reliably.
- I’m not sure we should be risk-neutral maximisers for animal welfare.
[1] Conditional on being a risk-neutral maximiser who values money linearly. In the real world, I’d shy away from A due to ambiguity aversion and because, to me, -$1000 matters more than +$1000.
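For what it’s worth, here’s a toy version of that last point (the 2.25 loss weight is just an illustrative prospect-theory-style number, not a considered view):

```python
# Toy illustration of "-$1000 matters more than +$1000": weight losses
# more heavily than gains and deal A can look bad even under a symmetric
# prior centred on $0, with no ambiguity aversion needed. The loss weight
# of 2.25 is an assumed, illustrative parameter.
import random

random.seed(1)
LOSS_WEIGHT = 2.25

def weighted(payoff: float) -> float:
    # Losses count LOSS_WEIGHT times as much as gains.
    return payoff if payoff >= 0 else LOSS_WEIGHT * payoff

def subjective_value_a(sigma: float = 500, n: int = 100_000) -> float:
    total = 0.0
    for _ in range(n):
        balance = max(-1000.0, min(1000.0, random.gauss(0, sigma)))
        total += weighted(20.0 + balance)
    return total / n

print("loss-weighted value of A:", round(subjective_value_a(), 2))
print("loss-weighted value of B:", weighted(1.0))
# A goes clearly negative while B stays at $1.
```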