“Another way of putting it—the question isn’t ‘how likely is this to be a scam,’ but ‘how likely is this to be a real offer.’ Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?”
Thanks for the example. Yes, I think you’ve convinced me on this point. I think I want to say something like “when we have a good sense of the distribution of events, the bigger an event’s departure from the typical, the less likely that event is.”
But I still think (and maybe this is going back to #1 a little) that this has some issues. We don’t know how likely infinite payoffs are; a theist can say that literally every human has achieved an infinite payoff, so I don’t think we can say infinite payoffs don’t happen. And outside religion, an infinite universe or multiverse may exist, so if our actions are correlated with those of other people, all our actions might produce an infinite payoff.
And even if I accepted that we should discount infinite payoffs, I’m not sure the probability would fall fast enough to keep the expected payoff finite.
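To make that worry concrete, here is the standard St. Petersburg-style illustration (a toy schedule of payoffs and probabilities, not anything specific to the wager). Suppose the payoff at level n is 2^n and compare two discounting schedules:

\[
\sum_{n=1}^{\infty} \underbrace{2^{-n}}_{\text{probability}} \cdot \underbrace{2^{n}}_{\text{payoff}} = \sum_{n=1}^{\infty} 1 = \infty,
\qquad
\sum_{n=1}^{\infty} 3^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} \left(\tfrac{2}{3}\right)^{n} = 2.
\]

In the first schedule the discount exactly cancels the growth and the expectation diverges; the expectation stays finite only when the probability falls strictly faster than the payoff grows, as in the second schedule. So merely discounting big payoffs isn’t enough; the rate of discounting is what decides it.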
The word “produce” is causal language. It seems to me that even if our actions are correlated with those of other people, there’s no reason to think that we in particular are the ones controlling that correlated action. Do you think we can be said to “produce” utility if we’re not causally in control of that production?
Yes, I feel comfortable saying that if the EV changes based on our action, then we are in some sense responsible for it, or have produced it.
In Newcomb’s paradox, I think you can “produce” additional dollars.
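To spell that out, here is the usual expected-value arithmetic on the evidential reading, where you condition on your own choice. Assume the standard payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 in the transparent box) and an illustrative predictor accuracy of 0.99:

\[
E[\text{one-box}] = 0.99 \cdot \$1{,}000{,}000 = \$990{,}000,
\qquad
E[\text{two-box}] = 0.99 \cdot \$1{,}000 + 0.01 \cdot \$1{,}001{,}000 = \$11{,}000.
\]

On that reading, choosing to one-box raises your expectation by $979,000 even though the choice doesn’t causally touch a box that has already been filled, which is exactly the sense of “produce” at issue.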
I guess it’s useful, then, to clarify which question we’re interested in.
I personally am interested in the question “given free will and personal control over the outcome, should we choose a strategy of pursuing infinite utility?”
I am less interested in “if you did not have control over the outcome, would you say it’s better if the universe were deterministically set up such that we are pursuing infinite utility?”
Are you interested in the second question?
I’m mostly interested in the first. I think people should take Pascal’s wager!