Just because the universe is very big doesn’t mean we are very near the bound. We’d only be very near the bound if the universe were both very big and very perfect, i.e. suffering, injustice, etc. all practically nonexistent as a fraction of things happening.
My thought was that you’d need a large universe consisting of people like us to be very near the bound, otherwise you couldn’t use boundedness to get out of assigning a high expected value to the example projects I proposed. There might be ways of finessing the dimensions of boundedness to avoid this sort of concern, but I’m skeptical (though I haven’t thought about it much).
I also find it methodologically dubious to adjust your value function to fit what actions you think you should do. It feels to me like your value function should be your value function, and you should adjust your decision rules if they produce a bad verdict. If your value function is bounded, so be it. But don’t cut it off to make expected value maximization more palatable.
If the math checks out then I’ll keep my bounded utility function but also maybe add in some nonconsequentialist-ish stuff to cover this case and cases like it.
I can see why you might do this, but it feels strange to me. The reason to save the child isn’t that it’s a good thing for the child not to drown, but that there’s some rule you’re supposed to follow that tells you to save the kid? Do these rules happen to require you to act in ways that basically align with what a total utilitarian would do, or do they have the sort of oddities that afflict deontological views (e.g. don’t lie to the murderer at the door)?