Things with a 50% chance of being very good aren’t Pascal’s muggings! Your decision theory can’t be “Pascal’s mugging means I ignore everything with a probability of less than 0.5 of being good.”
I agree we don’t ignore everything with a probability of less than 0.5 of being good.
Can you clarify what you mean by “50% chance of being very good”?
1) Rethink Priorities gives shrimp a 23% chance of sentience.
2) Their non-sentience-adjusted welfare range then goes from 0 (at the 5th percentile) to 1.095 (at the 95th percentile). From zero to more than a human is such a large uncertainty range that I could accept arguments at this point that it might be “unworkable”, as Henry says (personally I don’t think it’s unworkable).
3) Then, after adjusting for sentience, it looks like this (a rough sketch of the arithmetic is below).
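To make the numbers concrete, here’s a minimal sketch of the kind of adjustment I mean: just scaling the welfare-range percentiles by the 23% probability of sentience. This is my own illustration, not Rethink Priorities’ actual method (I believe they work with full distributions, so their sentience-adjusted percentiles won’t simply be 0.23 × the unadjusted ones):

```python
# Minimal sketch of a naive sentience adjustment (my own illustration,
# not Rethink Priorities' actual model, which I believe uses full
# probability distributions rather than scaling point percentiles).

P_SENTIENCE = 0.23  # RP's probability that shrimp are sentient

# Non-sentience-adjusted welfare range, relative to humans = 1
welfare_range = {
    "5th percentile": 0.0,
    "95th percentile": 1.095,
}

# Naive adjustment: scale each percentile by the probability of sentience
adjusted = {p: round(P_SENTIENCE * w, 3) for p, w in welfare_range.items()}

print(adjusted)  # {'5th percentile': 0.0, '95th percentile': 0.252}
```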
Whatever way the cookie crumbles, I think that’s a lot smaller than a “50% chance of being very good”, and it also comes with a high uncertainty range.