My perspective on the issue is that by accepting the wager, you are likely to become far less effective at achieving your terminal goals (since even if you can discount higher-probability wagers, there will eventually be a lower-probability one that you won’t be able to think your way out of and will thus have to entertain on principle), and to become vulnerable to adversarial attacks, leading to actions which in the vast majority of possible universes are losing moves.
If your epistemics require that you spend all your money on projects that will, for all intents and purposes, do nothing (and which, if universally followed, would lead to a clearly dystopian world where only muggers get money), then I’d wager that the epistemics are the problem. Rationalists and EAs should play to win, not fall prey to obvious basilisks of our own making.
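The dynamic described above (every discount you apply can be outbid by a bigger promise) can be sketched numerically. A toy illustration in Python, with made-up numbers for the cost, credence, and payoff:

```python
# Toy model of iterated Pascal's mugging (all numbers hypothetical).
# Each round we halve our credence in the mugger's promise, but the
# mugger responds by promising ten times more, so the naive expected
# value of paying keeps growing rather than shrinking.
cost = 5.0        # what the mugger asks for
credence = 1e-3   # our probability that the promise is real
payoff = 1e4      # the promised reward
evs = []
for _ in range(3):
    evs.append(credence * payoff - cost)
    credence /= 2   # we grow more skeptical each round...
    payoff *= 10    # ...but the promise outpaces our skepticism
print(evs)          # EVs grow roughly as 5, 45, 245
```

Under naive expected value maximization, no finite amount of skepticism helps so long as the promised payoff can grow faster than the credence shrinks.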
I think this is true for some people, but not for most people. Religion seems helpful for happiness, health, having a family, etc., which are some of the most common terminal goals out there.
This is a good point, although I would argue that the reasons practicing religion has these advantages are unrelated to its being a case of Pascal’s wager (if we let Pascal’s wager stand for promises of infinite value in general).
This argument is one that makes intuitive sense, and of course I am no exception to that intuition. However, intuition is not the path to truth; logic is. Unless you can provide a logic-founded reason why an almost certain loss with a minuscule chance of a huge win is worse than an unlikely loss with a probable win, I can’t accept the argument.
Although I don’t think Yitz’s comment is persuasive, I don’t think your response is either. What’s the “logic-founded” reason for accepting the wager? You might say expected value theory, but then, it’s possible to ask what the reason for that is, etc. It’s intuition all the way down.
That’s true, but I think we need to make the fewest intuition-based assumptions possible. Yitz’s suggestion adds an extra assumption ON TOP of expected value theory, so I would need a reason to add that assumption.
Oops, I got mixed up; that response related to a totally different comment. See my reply below for my actual response.
Consider three propositions:

1. Expected value theory is true.
2. Expected value theory recommends sometimes taking bets that we expect to lose.
3. We should not adopt decision theories that recommend sometimes taking bets that we expect to lose.
You reject 3.
Yitz rejects 1.
This is not a matter of making more or fewer assumptions. Instead, it’s a matter of weighing which of the propositions one finds least plausible. There may be further arguments to be made for or against any of these points, but it will eventually bottom out at intuitions.
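Proposition 2 is easy to make concrete with a toy bet (all numbers hypothetical): a bet whose most likely outcome is a loss can still have positive expected value, so expected value theory recommends taking it.

```python
# A bet we "expect to lose" (the modal outcome is a loss) that
# expected value theory nevertheless recommends (numbers hypothetical).
stake = 1.0          # forfeited with probability 0.999
win_prob = 0.001
win_payout = 2000.0

ev = win_prob * win_payout - (1 - win_prob) * stake
print(f"P(lose) = {1 - win_prob:.3f}, EV = {ev:+.3f}")
```

Here the expected value works out to about +1.001 even though we lose 99.9% of the time, which is exactly the tension propositions 1 and 3 are pulling on.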
Oh wait, sorry, I got confused with a totally different comment that did add an extra assumption. My bad...
As for the actual comment this thread is about: expected value theory can be derived from the axioms of VNM-rationality (which I know nothing about, btw), whereas proposition 3 is not really based on anything as far as I’m aware; it’s just a kind of vague axiom of itself. I feel we should refrain from using intuitions as much as possible except when forced to at the most fundamental level of logic, like how we don’t just assume 1+1=2 but reduce it to a more fundamental level of assumptions: the ZFC axioms.
In summary, propositions 1 and 3 are mutually exclusive, and I think 1 should be accepted more readily because it is founded on a more fundamental level of assumptions.
Then it becomes a choice between accepting the VNM axioms and accepting proposition 3 above.
Like I said, I agree that we should reject 3, but the reason for rejecting 3 is not because it is based on intuition (or based on a non-fundamental intuition). The reason is because it’s a less plausible intuition relative to others. For example, one of the VNM axioms is transitivity: if A is preferable to B, and B is preferable to C, then A is preferable to C.
That’s just much more plausible than Yitz’s suggestion that we shouldn’t be “vulnerable to adversarial attacks” or whatever.
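For what it’s worth, transitivity’s plausibility is usually backed by more than bare intuition: an agent with cyclic preferences can be money-pumped. A minimal sketch (the preference cycle, starting holdings, and fee are invented for illustration):

```python
# An agent with cyclic (intransitive) preferences can be money-pumped:
# it pays a small fee for every trade it regards as an upgrade, and the
# cycle A > B > C > A lets a trader drain it indefinitely.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic preferences

def trade(holding, offered, money, fee=1):
    """Accept the offered item (paying the fee) iff it is preferred to the holding."""
    if (offered, holding) in prefers:
        return offered, money - fee
    return holding, money

holding, money = "B", 100
for offered in ["A", "C", "B"] * 3:  # cycle the offers three times
    holding, money = trade(holding, offered, money)
print(holding, money)  # back where it started, but 9 units poorer
```

Each lap around the cycle returns the agent to its original holding minus three fees, which is the standard argument for why transitivity is a constraint on rational preference rather than just a hunch.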
It’s also worth noting that your justification for accepting expected value theory is not based on the VNM axioms, since you know nothing about them! Your justification is based on a) your own intuition that it seems correct and b) the testimony of the smart people you’ve encountered who say it’s a good decision theory.
Yes, this is exactly what I’m saying.