This is an interesting response, but doesn’t it run into a problem where you could have large amounts of evidence that Action X provides an infinite payoff but would have to ignore it?
Imagine really credible scientists/theologians discover there’s a 90% chance that X gives you an infinite payoff and a 90% chance that Y gives you $5, but you’d feel obligated to grab the $5 just because you’re an infinity skeptic.
I also think this isn’t consistent with how people decide things in general- we didn’t need more evidence that COVID vaccines worked than flu vaccines worked, even though the expected utility from COVID vaccines was much higher.
This is a good response!
It’s common sense that our prior for whether or not a technology will work for a given purpose depends on empiricism. This accounts for why we’d reject the million dollar post office run—we have abundant empirical and mechanistic evidence that offers of ~free money are typically lies or scams. Utility can be an inverse proxy for mechanistic plausibility, but only because of efficient market hypothesis-like considerations. If there was a $20 on the sidewalk, somebody would have already picked it up.
Right, the distinction between expected value from tech and expected utility from offers from people makes sense. But I think your axiom still doesn’t provide enough reason to reject Pascal’s Wager.
I’m not sure if we can say we have good grounds to apply this discounting to God or the divine in general. Can we put that in the same bucket as human offers? I guess you could say yes by arguing that God is just a human invention but isn’t that like assuming the conclusion or something?
I don’t think probability declines as fast as promised value rises- a guy on the street offering me $1 billion versus $100 million is about equally likely to be a scam, but the $$$ is different.
Because of how infinity works, wouldn’t I have to think there is a 100% chance that your axiom holds? Otherwise, I would think even if there’s only a 1% chance X God is real and a 1% chance that the expected value is infinite it still dominates everything.
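The dominance worry here can be made concrete with a toy expected-value calculation. The 1% figures are made up for illustration, and IEEE floating-point infinity stands in for the infinite payoff:

```python
# Toy dominance calculation: under ordinary expected-value arithmetic,
# any nonzero credence times an infinite payoff is still infinite.
p_axiom_fails = 0.01      # hypothetical: 1% chance the discounting axiom is wrong
p_god_exists = 0.01       # hypothetical: 1% chance the relevant God exists

ev_wager = p_axiom_fails * p_god_exists * float("inf")
ev_safe_bet = 0.9 * 5     # the 90%-chance-of-$5 option

print(ev_wager)                # inf
print(ev_wager > ev_safe_bet)  # True: the wager dominates at any nonzero credence
```

However small the two probabilities, their product stays strictly positive, so the infinite term swamps every finite option, which is exactly the objection being raised.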
I’m not sure about #1 or #3. I do think that #2 is false, again on mechanistic grounds. It’s harder to get a billion dollars than a million dollars, and that continues to apply as the sums of money offered grow larger.
Another way of putting it—the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?
“Another way of putting it—the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?”
Thanks for the example. Yes, I think you’ve convinced me on this point. I think I want to say something like “when we have a good sense of the distribution of events, we know the bigger the departure from typical events, the less likely it is.”
But I still think (and maybe this is going back to #1 a little) that this still has some issues. We don’t know how likely infinite payoffs are- some theist can say literally every human has achieved an infinite payoff- so I don’t think we can say infinite payoffs don’t happen. Outside religion, an infinite universe or multiverse may exist, so if our actions are correlated with other people’s, all our actions might produce an infinite payoff.
And if I did accept that we should discount infinite payoffs, I’m not sure the probability would fall fast enough to still get a finite payoff in expectation.
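Whether the probability “falls fast enough” is really a convergence question. A minimal sketch with made-up decay rates (unnormalized toy weights, not anyone’s actual credences): if credence in a payoff of size v falls like 1/v³, the expected value converges, but if it falls only like 1/v², the EV partial sums form a harmonic series and diverge:

```python
# Does EV = sum over payoffs v of credence(v) * v stay finite?
# The decay rates below are illustrative, unnormalized toy weights.
def partial_ev(credence, n_terms=100_000):
    # Partial sum of the expected value over payoffs v = 1..n_terms
    return sum(credence(v) * v for v in range(1, n_terms + 1))

fast_decay = lambda v: 1 / v**3  # credence falls faster than payoff grows
slow_decay = lambda v: 1 / v**2  # credence falls exactly as fast as payoff grows

print(partial_ev(fast_decay))  # ~1.6449, converging toward pi^2 / 6
print(partial_ev(slow_decay))  # ~12.09, a harmonic sum that diverges as n grows
```

So the discounting move only rescues a finite expectation if credence falls strictly faster than promised value rises, which is the point in question.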
The word “produce” is causal language. It seems to me that even if our actions are correlated with other people, there’s no reason to think that we in particular are the ones controlling that correlated action. Do you think we can be said to “produce” utility if we’re not causally in control of that production?
Yes, I feel comfortable saying that if the EV changes based on our action, we are responsible for it in some sense, or produced it.
In Newcomb’s paradox, I think you can “produce” additional dollars.
I guess it’s useful then to clarify which point we’re interested in.
I personally am interested in the question “given free will and personal control over the outcome, should we choose a strategy of pursuing infinite utility?”
I am less interested in “if you did not have control over the outcome, would you say it’s better if the universe was deterministically set up such that we are pursuing infinite utility?”
Are you interested in the second question?
I’m mostly interested in the first. I think people should take Pascal’s wager!
I suspect that the answer to some of these questions lies at an intersection between psychology and mathematics.
Our understanding of physics is empirical. Before making observations of the universe, we’d have no reason to entertain the hypothesis that “light exists.” There would be infinite possibilities, each infinitely unlikely.
Yet somehow, based on our observations, we find it wise to believe that our current understanding of how physics works is true. How did we go from a particular physics model being infinitely unlikely to it being considered almost certainly true, based on finite amounts of evidence?
It seems that we have a sort of mental “truth sensor,” which gets activated based on what we observe. A mathematician’s credence in the correctness of a proof is ultimately sourced from their “truth sensor” getting activated based on their observation of the consistency of the relationships within the proof.
So we might ultimately have to reframe this question as “why do/don’t arguments for Pascal’s Wager activate our ‘truth sensor’?”
This is an easier question to answer, at least for me. I see no compelling way to attack the problem, nobody else seems to either, I see the claims of world religions about how to achieve utility as being about as informative as taking advice from monkeys on typewriters, and accepting Pascal’s Wager seems deeply contrary to common sense. These are unfortunately only reasons not to spend time thinking more deeply about the problem, and don’t contribute in any productive way to moving toward a resolution :/
I would put it a different way.
If we use normal decision-making rules that many people use, especially consequentialists, we find that Pascal’s wager is a pretty strong argument. There are many weak objections to it and some more promising ones. But unless we’re certain of these objections, it seems difficult to escape the weight of infinity.
If we look to other more informal ways to make decisions- favoring ideas that are popular, beneficial, and intuitive- then major religions that claim to offer a route to infinity are pretty popular, arguably beneficial, and theism in general seems more intuitive to most people than atheism.
I think that given we have no strong reason to reject Pascal’s wager, I would suggest that people in general should do “due diligence” by investigating the claims and evidence for at least the major religions. If someone says, hey, I’ve spent 500 hours investigating Christianity and 500 hours investigating Islam and glanced at these other things and they all seem implausible… that’s one thing. But I think it’s hard (probably impossible) to justify not taking Pascal’s wager without substantially investigating religious claims.
If, for instance, you end up thinking there’s a 0.5% chance that Jesus was God or Mohammed was the messenger of God, that’s pretty substantial.
How many hours do you think a reasonable person is obligated to spend investigating religions before rejecting the wager?
Great question.
Let me offer the idea of “universal common sense.”
“Common sense” is “the way most people look at things.” The way people commonly use this phrase today is what we might call “local common sense.” It is the common sense of the people who are currently alive and part of our culture.
Local common sense is useful for local questions. Universal common sense is useful for universal questions.
Since religion, as well as science, claims to address universal questions, we ought to rely on universal common sense. The galactic wisdom of crowds, if you will.
Of course, we can’t talk to people in the past or future. But even when we rely on local common sense, we are in some sense making a prediction about what our peers would say if we asked them the question we have in mind.
We can still make a prediction about what, say, a stone age person, or a person living 10,000 years in the future, would say if we asked them about whether Catholicism was real. The stone age person wouldn’t know what you’re talking about. The person 10,000 years in the future, I suspect, wouldn’t know either, as Catholicism might have largely vanished into history.
However, I expect that science will still be going strong 10,000 years in the future, if humanity lives to that point. And I expect that by then, vastly more people will believe (or have believed) in a form of scientific materialism than will believe in any particular religion. Hence, I predict that “universal common sense” is that we ought not spend much time at all investigating the truth of any particular religion.
I think arguing that current view X is justified because one imagines that future generations will also believe in X is really unconvincing.
I think most people think their views will be more popular in the future. Liberal democrats and Communists have both argued that their view would dominate the world. I don’t think it adds anything other than illustrating the speaker is very confident of the merits of their worldview.
If for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we’re in the last generation so there are no future humans, would you change your mind?
I’ve had a little more chance to flesh out this idea of “universal common sense.” I’m now thinking of it as “the wisdom of the best parts of the past, present, and future.”
Let’s say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.
Given all three of these assumptions, then I think we should consider adopting that point of view.
In the AI doom scenario, I think we should reject the common sense of the denizens of that future on matters pertaining to AI doom, as they weren’t wise enough to avoid doom.
In the Mormon scenario, I think that if the future is Mormon, then that suggests Mormonism would probably be a good thing. I generally trust people to steer toward good outcomes over time. Hence, if I believed this, then that would make me take Mormonism much more seriously.
I have a wide confidence interval for this notion of “universal common sense” being useful. Since you seem to be confidently against it, do you have further objections to it? I appreciate the chance to explore it with a critical lens.
I’m not against it- I think it’s an okay way of framing something real. Your phrasing here is pretty sensible to me.
“Let’s say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be.
Given all three of these assumptions, then I think we should consider adopting that point of view.”
But I have concerns about the future perspective, in theory and practice.
I think people will just assert future people will agree with them. You think future people will agree with you, I think future people will agree with me. There’s no way to settle that dispute conclusively (maybe expert predictions or a prediction market can point to some answer), so I think imagining the future perspective is basically worthless.
In contrast, we can look at people today or in the past (contingent on historical records). The widespread belief in the divine is, I think, at least another piece of (weak?) evidence that points to taking the wager. This could be weakened if secular societies or institutions were much more successful than their contemporaries.
“My view makes perfect sense, contemporary culture is crazy, and history will bear me out when my perspective becomes a durable new form of common sense” is a statement that, while it scans as arrogant, could easily be true—and has been many times in the past. It at least explains why a person who subscribes to “social intelligence” as a guide might still hold many counterintuitive opinions. I agree with you though that it’s not useful for settling disputes when people disagree in their predictions about “universal common sense.”
If you believe that current and past common sense is a better guide, then doesn’t that work against Pascal’s Wager? I mean, how many people now, or in the past, would agree with you that Pascal’s Wager is a good idea? I think it has stuck around in part because it’s so counterintuitive. We don’t exactly see a ton of deathbed conversions, much less for game-theoretic reasons.
I would say if we use other people’s judgment as a guide for our own, it’s an argument for the belief in the divine/God/the supernatural and it becomes hard to say Christianity and Islam have negligible probability. So rules that are like “ignore tiny probability” don’t work. Your idea of discounting probability as utility rises still works but we’ve talked about why I don’t think that’s compelling enough.
I don’t have good survey evidence on Pascal’s Wager, but I think a lot of religious believers would agree with the general concept- don’t risk your soul, life is short and eternity is long, and other phrases like that seem to reference the basic idea.
This guy converted on his deathbed because of the wager (John von Neumann).