Thanks for the post; I’d like to see more people thinking about the consequences of “fanaticism”. But I should note that discussions of Pascal’s Wager have been running for a long time among rationalists, decision theorists, and philosophers, and even in this community.
I disagree a bit with the conclusions. Sorry if this is too brief:
(1) is probably right, but I’m not sure it can be based on the reductio presented in this post;
(2) is probably wrong. I think the best theory of rationality probably converges with the best theory of reasonableness—it would show why bounded rational cooperators should display this trait. But it’s an interesting distinction to have in mind.
(3) I guess most consequentialists would agree that expected utility shouldn’t always guide behaviour directly. They might distinguish between the correctness of an action (what you should do) and its value; your case fails to point out why the value of a Pascal’s Wager-like scenario shouldn’t be assessed with expected utility. But what’s really weird is that the usual alternative to expected value is common-sense deontic reasoning, which is often taken to claim that you should (or could) do a certain action A no matter what its consequences or chances are: fiat justitia, pereat mundus. I fail to see why this shouldn’t be called “fanatical”, too.
(4) I’m very inclined to agree with this when we are dealing with very uncertain subjective probability distributions, and even with objective probabilities with very high variance (like the St. Petersburg paradox). I’m not sure the same would apply to well-defined frequencies, so I wouldn’t proscribe a lottery with a winning probability of 10^(-12).
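To make that distinction concrete, here is a minimal sketch (all prize and sample numbers are made up) contrasting the St. Petersburg game, whose expected value diverges so sample means never stabilize, with a well-defined 10^(-12) lottery, whose expected value is perfectly tame:

```python
import random

def st_petersburg_sample():
    """One play of the St. Petersburg game: keep flipping a fair coin;
    the payoff doubles on each head and pays out at the first tail."""
    payoff = 2.0
    while random.random() < 0.5:  # heads: double and flip again
        payoff *= 2.0
    return payoff

# The theoretical expected value is infinite (sum over k of (1/2**k) * 2**k),
# so sample means never settle down -- the variance is effectively unbounded:
samples = [st_petersburg_sample() for _ in range(100_000)]
print(sum(samples) / len(samples))  # keeps drifting upward as the sample grows

# By contrast, a lottery with a well-defined frequency of 1e-12 and a fixed
# prize has a perfectly ordinary expected value:
p, prize = 1e-12, 10**9
print(p * prize)  # 0.001: tiny, finite, and fine to plug into expected utility
```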
That being said, it’s been a long time since I last checked on the state of the matter… but the main lesson I learned about PW was that ideas should “pay rent” to be in our heads (I think Yudkowsky mentioned it while writing about a PW-like scenario). So the often-neglected issue with PW scenarios is that it’s hard to account for their opportunity costs, and those are potentially infinite, precisely because it’s so cheap to formulate such wagers. For instance, if I am willing to assign a non-negligible credence to a random person who tries to Pascal-mug me, then not only can I be mugged by anyone, I also have to assign some probability to events like the following (see the toy sketch after these examples):
The world will become the ultimate Paradise / Hell iff I voice a certain sequence of characters in the next n seconds.
Maybe there’s a shy god around waiting for our prayer.
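A toy calculation, with entirely hypothetical numbers, of why such cheaply minted wagers carry runaway opportunity costs: for every mugging there is an equally cheap mirror wager with the opposite sign, while the cost of actually complying is real and accumulates:

```python
# Toy model (every number here is made up): suppose I grant each arbitrary
# Pascal's-mugging claim the same small credence EPS, with stakes of
# magnitude PAYOFF.
EPS = 1e-10
PAYOFF = 1e15  # "astronomical" stakes, by construction

# Claims are free to mint, and each one has an equally cheap mirror claim
# with the opposite sign ("...iff you do NOT comply"), so the expected
# values cancel pair by pair, no matter how many wagers are in play:
def net_expected_value(n_pairs):
    return sum(EPS * PAYOFF - EPS * PAYOFF for _ in range(n_pairs))

print(net_expected_value(10**6))  # 0.0

# The cost of actually paying each mugger, though, is real and adds up,
# so the "always comply" policy loses without bound as muggers multiply:
COST_PER_MUGGING = 5  # dollars handed over, say
print(-COST_PER_MUGGING * 10**6)  # -5000000, and growing with every mugger
```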
Pascal’s wager is somewhat fraught, and what you should make of it may turn on what you think about humility, religious epistemology, and the space of plausible religions. What’s so interesting about the MWI project is that it isn’t like this. It isn’t some theory concocted from nothing and assigned a probability. There’s at least some evidence that something in the ballpark of the theory is true. And it’s not easy to come up with an approximately-as-plausible hypothesis suggesting that the actions which might cause branchings might instead prevent them, or that the alternative choices we have might lead to massive amounts of value in other ways.
If you grant that MWI is coherent, then I think you should be open to the possibility that it isn’t unique, and that there are other hypotheses suggesting possible projects that are much more likely to create massive amounts of value than to prevent it.
Actually, I didn’t address your argument from MWI because I suspect our choices couldn’t make any difference. Maybe I’m wrong (it’s way beyond my expertise), but quantum branching events would be happening all the time, so either (i) there are (or will be) infinitely many worlds whatever we do, in which case the problem here is more about infinite ethics than about Fanaticism, or (ii) there is a limit to the number of possible branches, which I guess will most likely be reached whatever we do. So it’s not clear to me that we would gain additional utility by creating more branching events.
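A back-of-the-envelope sketch of case (ii); every rate and cap below is made up, purely to illustrate the scale comparison being gestured at:

```python
# Suppose background decoherence already causes R branching events per
# second, and there is some finite cap on distinguishable branches,
# expressed as LOG2_N_MAX = log2(maximum branch count). (Both numbers
# are hypothetical placeholders, not physical estimates.)
R = 10**20           # assumed background branching rate, events/sec
LOG2_N_MAX = 10**40  # assumed cap on log2(number of branches)

seconds_to_cap = LOG2_N_MAX / R  # each event roughly doubles the branch count
print(seconds_to_cap)            # ~1e20 s: the cap gets hit whatever we do

# A project that deliberately adds EXTRA branching events merely moves the
# arrival at the cap forward by a vanishing fraction of the total:
EXTRA = 10**6
print(EXTRA / LOG2_N_MAX)        # ~1e-34 of the doublings on the way there
```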
[And yet, the modus ponens of one philosopher is the modus tollens of another… rat/EAs have actually been discussing the potential implications of weird physics: here, here...]
However, I’m not sure the problem I identified with PWs (i.e., take opportunity costs seriously) wouldn’t apply here, too… if we are to act conditional on MWI being true, then we should do the same for every theory that could be true with similar odds. But how strong should this “could” be? Like “we could be living in a simulation”? And how long until you face a “basilisk”, or just someone using motivated reasoning?
As Carl Shulman points out, this might be a case of “applying the possibility of large consequences to some acts where you highlight them and not to others, such that you wind up neglecting more likely paths to large consequence.”