Still, it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances.
I’m very sympathetic to the idea that all we ought to be doing is to maximize the probability we achieve an infinite amount of value. And I’m also sympathetic to religion as a possible action plan there; the argument does not warrant the “incredulous stares” it typically gets in EA. But I don’t think it’s as simple as the above quote, for at least two reasons.
First, religious belief broadly specified could more often create infinite amounts of disvalue than infinite amounts of value, from a religious perspective. Consider for example the scenario in which non-believers get nothing, believers in the true god get plus infinity, and believers in false gods get minus infinity. Introducing negative infinities does wreck the analysis if we insist on maximizing expected utility, as Hajek points out, but not if we switch from EU to a decision theory based on stochastic dominance.
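To make the contrast concrete, here is a toy sketch (my own illustrative numbers, nothing from Hajek or this thread) of why mixed infinities break expected utility while a dominance-based ranking can survive: when an action's outcome distribution puts positive probability on both plus and minus infinity, its expectation is undefined, yet one action can still be at least as good as another in every state and strictly better in some.

```python
import math

# Three states of the world with their probabilities.
probs = [0.5, 0.3, 0.2]

# Illustrative utilities under two actions (purely made up):
# in state 1 the true god rewards believers (+inf), in state 2 a
# different god punishes them (-inf), in state 3 there are no gods.
utility_A = [math.inf, -math.inf, 0.0]   # e.g. adopt some religion
utility_B = [math.inf, -math.inf, -1.0]  # e.g. a slightly worse variant

def expected_utility(probs, utils):
    # inf + (-inf) is NaN in IEEE arithmetic, mirroring the fact that
    # the expectation is mathematically undefined.
    return sum(p * u for p, u in zip(probs, utils))

eu_A = expected_utility(probs, utility_A)  # NaN: undefined
eu_B = expected_utility(probs, utility_B)  # NaN: undefined

# Statewise dominance (a sufficient condition for stochastic dominance)
# still ranks the actions: A is at least as good in every state and
# strictly better in the godless state.
a_dominates_b = (
    all(a >= b for a, b in zip(utility_A, utility_B))
    and any(a > b for a, b in zip(utility_A, utility_B))
)
```

So expected utility maximisation is silent here (both expectations are undefined), but a dominance criterion still delivers a verdict between A and B.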
Second, and I think more importantly, religiosity might lower the probability of achieving infinite amounts of value in other ways. Belief in an imminent Second Coming, for instance, might lower the probability that we manage to create a civilization that lasts forever (and manages to permanently abolish suffering after a finite period).
Will read up on stochastic dominance; it will presumably bring me back to my micro days thinking about lotteries...
Note that I think there may be a way of dealing with this while staying within the expected utility framework: ignore undefined expected utilities, since they are not action-guiding, and focus instead on the part of our probability space where they don't arise. In this case, I suggest we focus only on worlds in which you can't have both negative and positive infinities; we'd assume in our analysis that only one of them exists (you'd choose whichever is more plausible on its own). Interested to hear whether you think that's plausible.
On your second point, I guess I doubt that sending a couple of thousand people into each religion would have big enough negative indirect effects to make it net negative. Obviously this would be hard to assess, but I imagine we'd agree on the methodology?
I was just saying that, thankfully, I don’t think our decision problem is wrecked by the negative infinity cases, or the cases in which there are infinite amounts of positive and negative value. If it were, though, then okay—I’m not sure what the right response would be, but your approach of excluding everything from analysis but the “positive infinity only” cases (and not letting multiple infinities count for more) seems as reasonable as any, I suppose.
Within that framework, sure, having a few thousand believers in each religion would be better than having none. (It’s also better than having everyone believe in whichever religion seems most likely, of course.) I was just taking issue with “it might be best to encourage as many people as possible to adopt some form of religious belief to maximise our chances”.
Thanks @trammell.