Thanks Jim, very interesting. I also feel conflicted, but lean towards taking A.[1]
Here’s how I feel about that:
- Bracketing feels strange when it asks us to be led by consequences which are small in the grand scheme (e.g., +/- $1; Emily’s shoulder), and to set aside consequences which are fairly proximate and which clearly dominate the stakes (e.g., up to +/- $1000; killing the terrorist/kid). It doesn’t feel so strange when our decision procedure calls on us to set aside consequences which dominate the stakes but don’t feel so proximate (e.g., longtermist concerns).
- When I look at very specific cases, I can find it hard to tell when I’m dealing with standard expected value under uncertainty, and when I’ve run into Knightian uncertainty, cluelessness, etc. I’m bracketing out up to +/- $1000 when I say I take A, but I do feel drawn to treating this as a normal distribution centred on $0.
Ways in which it’s disanalogous to animals that might be important:
- Animal welfare isn’t a one-shot problem. I think the best things we can do for animals involve calculated bets that integrate concern for their welfare into our decision-making more consistently, and teach us how to improve their welfare more reliably.
- I’m not sure we should be risk-neutral maximisers for animal welfare.
[1] Conditional on being a risk-neutral maximiser who values money linearly. In the real world, I’d shy away from A due to ambiguity aversion, and because, to me, -$1000 matters more than +$1000.
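To make the footnote's point concrete, here's a toy sketch (my own numbers, not from the discussion) of why a symmetric bet around $0 looks neutral to a risk-neutral agent who values money linearly, but bad to an agent for whom -$1000 matters more than +$1000. The loss-aversion coefficient of 2 is an illustrative assumption.

```python
def expected_utility(outcomes, utility):
    """Expected utility of equiprobable monetary outcomes."""
    return sum(utility(x) for x in outcomes) / len(outcomes)

# A symmetric gamble: win or lose $1000 with equal probability.
gamble = [1000, -1000]

def linear(x):
    # Risk-neutral: values money linearly.
    return x

def loss_averse(x):
    # Losses weigh twice as much as gains (assumed coefficient of 2).
    return x if x >= 0 else 2 * x

print(expected_utility(gamble, linear))       # 0.0   -> indifferent
print(expected_utility(gamble, loss_averse))  # -500.0 -> declines the bet
```

Under linear utility the gamble nets to zero, so bracketing it out costs nothing in expectation; with any asymmetric weighting of losses, the same gamble has negative expected utility, which is one way to cash out the ambiguity-averse reluctance to take A.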