Just to clarify: is the core argument here roughly, “I’m suspicious of things that look like a Pascal’s mugging”?
If this is your argument, then I agree with you (to an extent). But reading through your examples, I feel unsure whether the Pascal’s mugging aspect is your crux, or whether it’s the weirdness of the conclusion. To test this concretely: if we were close to 100% confident that we do live in a universe where we could, e.g., produce quantum events that trigger branchings, would you want a lot of effort going into triggering such branchings? (For what it’s worth, I would want this.)
I don’t love the EA/rationalist tendency to dismiss long shots as Pascal’s muggings. Pascal’s mugging raises two separate issues: (1) what should we make of long shots with high expected value? and (2) how much evidence does testimony by itself provide for highly implausible hypotheses (particularly compared with other salient possibilities)? Considerations around (2) seem sufficient reason to be wary of Pascal’s mugging, regardless of what you think of (1).
I definitely think that if you were 100% confident in the simple MWI view, that should really dominate your altruistic concern. Every time the world splits, the number of pigs in gestation crates (at least) doubles! How can you not see that as something you should really care about? It might be a lonely road, but how can you pass up such high returns? (Of course it is bad for there to be pigs in gestation crates – I assume it is outweighed by good things, but those good things must be really good to outweigh such bads, so we should really want to double them. If they’re not outweighed, we should really try to stop branchings.)
For what it’s worth, I’d be inclined to think that the simple MWI should dominate our considerations even at a 1 in a thousand probability. Not sure about the 1 in a million range.
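To make the wager concrete, here’s a minimal back-of-the-envelope sketch in Python. The numbers (the value of an ordinary project, the number of branchings you could trigger, the 1-in-a-thousand credence) are made up purely for illustration; the only assumption carried over from the discussion is the simple-MWI picture on which each branching doubles the value at stake.

```python
# Toy expected-value comparison; all numbers are illustrative, not estimates.
# Assumption (from the "simple MWI" picture): each branching you trigger
# doubles the total value at stake, and branchings compound.

ordinary_value = 1_000          # value of a conventional altruistic project (arbitrary units)
baseline_value = 1              # value at stake in a single branch (arbitrary units)
credence_simple_mwi = 1 / 1000  # credence that the doubling picture is correct
n_branchings = 40               # number of branchings you might trigger

# If the doubling picture is right, value multiplies by 2 per branching;
# otherwise, assume triggering branchings accomplishes roughly nothing.
ev_branching = credence_simple_mwi * baseline_value * 2 ** n_branchings
ev_ordinary = ordinary_value

print(f"EV(trigger branchings) ~ {ev_branching:.2e}")  # about 1.1e+09
print(f"EV(ordinary project)   = {ev_ordinary}")
```

The exponential term dominates unless the credence is made astronomically small, which is why the 1-in-a-thousand case still seems to me to carry the wager.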
I think this post is the result of three motivations.
1.) I think the expected value of weird projects really is ludicrously high.
2.) I don’t want to work on them, or feel like I should be working on them. I get the impression that many, even most, EAs would agree.
3.) I’d bet I’m not going to win a fight about the rationality of fanaticism with Yoaav Isaacs or Hayden Wilkinson.
I definitely think that if you were 100% confident in the simple MWI view, that should really dominate your altruistic concern. Every time the world splits, the number of pigs in gestation crates (at least) doubles! How can you not see that as something you should really care about?
If you google terms like “measure,” “reality fluid,” or “observer fluid,” you find long discussions on LessWrong related to how “the number of pigs in gestation crates (at least) doubles!” is probably a confused way of thinking. I don’t understand these issues at all, but there’s definitely a rabbit hole to delve into from here.
Ah, reading your post and comments more closely, I realize you’re aware of the picture probably being a different one, but, in your example, you focus on “branching doubles the things that matter” because it leads to these fanatical conclusions. That makes sense.
Sure, but how small is the probability that it isn’t? It has to be really small to counteract the amount of value doubling would provide.
It depends what you compare it to. Sure, if you compare a case where no branching happens at all (i.e., no MWI) and one in which branching happens and you treat it as “branching doubles the amount of stuff that matters,” then yes, there’s a wager in favor of the second.
However, if you compare “MWI where branching doubles the amount of stuff that matters” to “MWI where there’s an infinite sea of stuff and within that sea, there’s objective reality fluid or maybe everything’s subjective and something something probabilities are merely preferences over simplicity,” then it’s entirely unclear how to compare these two pictures. (Basically, the pictures don’t even agree on what it means to exist, let alone how to have impact.)
I’m not sure I really understand the response. Is it that we shouldn’t compare the outcomes between, say, a Bohmian interpretation and my simplistic MW interpretation, but between my simplistic MW interpretation and a more sophisticated and plausible MW interpretation, and those comparisons aren’t straightforward?
If I’ve got you right, this seems to me to be a sensible response. But let me try to push back a little. While you’re right that it may be difficult to compare different metaphysical pictures considered as counterfactual, I’m only asking you to compare metaphysical pictures considered as actual. You know how great it actually is to suck on a lollipop? That’s how great it is to suck on a lollipop whether you’re a worm navigating through branching worlds or a quantum ghost whose reality is split across different possibilities or a plain old Bohmian hunk of meat. Suppose you’re a hunk of meat: how great would it be if you were instead a worm? Who knows and who cares! We don’t have to make decisions for metaphysical possibilities that are definitely not real and where sucking on a lollipop isn’t exactly this great.
I’m not sure I really understand the response. Is it that we shouldn’t compare the outcomes between, say, a Bohmian interpretation and my simplistic MW interpretation,
I’m not saying you can’t compare those two. You can – the simplistic MW interpretation wins because it has more impact at stake, as you say, so it comes out ahead under expected utility theory even if you assign it low credence.
However, if you’re going down the road of “which speculative physics interpretation produces the largest utilities under expected utility theory?” you have to make sure to get the biggest one, the one where the numbers grow the most. This is Carl’s point above. My point is related: it seems more plausible for there to be infinite* branches** already if we’re considering the many worlds interpretation, as opposed to branching doubling the amount of stuff that matters.
So, comparing infinite many worlds to your many worlds with some finite but ever-growing number of branches, it seems unclear which picture to focus on as expected utility maximizers. If there’s an infinite sea of worlds/branches all at once and all our actions have infinite consequences across infinite copies of ourselves in different worlds/branches, that’s more total utility at stake than in your example, arguably. I say “arguably” because the concept of infinity is contested by some, because there’s the infinitarian paralysis argument that says all actions that affect infinities of the same order are of equal value, and because there are philosophical issues around what it could possibly mean for something to “exist” if there’s an infinite number of everything you can logically describe (this goes slightly further than many worlds – “if everything we can coherently describe can exist, what would it even mean for something not to exist? Can some things exist more than others?”).***
In short, the picture becomes so strange that “Which of these speculative physics scenarios should I focus on as an expected utility maximizer?” becomes more a question about the philosophy of “What do we mean by having impact in a world with infinities?” and less about straightforwardly comparing the amounts of utility at stake.
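To illustrate the paralysis worry in the crudest possible way, here is a toy sketch only, using floating-point infinity as a stand-in for “infinite value at stake” (the numbers are arbitrary, and this is not a claim about how infinite ethics should actually be handled):

```python
import math

# Finite-but-huge EV from the "branching doubles value" picture (toy numbers).
ev_doubling_mwi = 0.001 * 2 ** 40

# Any positive credence times infinite value is infinite.
ev_infinite_sea = 0.01 * math.inf

print(ev_infinite_sea > ev_doubling_mwi)  # True: the infinite picture "wins" the wager
print(ev_infinite_sea > 0.5 * math.inf)   # False: but it can't beat any other infinite option
print(math.inf - math.inf)                # nan: differences between such options are undefined
```

Once infinities enter, “which picture has more utility at stake?” is no longer settled by arithmetic; it turns on the philosophical questions above.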
*I might butcher this (I remember there’s something about how the probabilities you get for branching “splits” may change based on arbitrary-seeming assumptions about which “basis” to use, or something like that? I found this section on Wikipedia on the preferred basis problem), but I think one argument for infinities in the MWI goes as follows. Say you have a quantum split and it’s 50-50, meaning 50% that the cat in the box is dead and 50% that it’s alive. In this situation, it seems straightforward to assume that one original world splits into two daughter worlds. (Or maybe the original splits into four worlds, half with a dead cat, half with an alive cat. It’s already a bit disconcerting that we maybe couldn’t distinguish between one world splitting into two and one world splitting into four?)
Now let’s assume there’s a quantum split, but the observed probabilities are something weird like 2⁄7. Easy, you say. “Two worlds with a dead cat, five worlds with an alive cat.”
Okay. But here comes the point where this logic breaks apart. Apparently, some quantum splits happen with probabilities that are irrational numbers – numbers that cannot be expressed as fractions. Wtf. :S (I remember this from somewhere in Yudkowsky’s quantum physics sequence, but here’s a discussion on a physics forum where I found the same point. I don’t know how reliable that source is.)
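For a concrete (and simplified) version of the problem: under the Born rule, outcome probabilities are squared amplitudes, and nothing forces those squares to be rational. The amplitudes below are an arbitrary choice of mine for illustration, not something taken from the sequence or the forum thread:

```python
import math
from fractions import Fraction

# Two-outcome state with amplitudes cos(1) and sin(1) (1 radian, chosen arbitrarily).
# The exact Born probabilities cos^2(1) and sin^2(1) are irrational numbers.
p_alive = math.cos(1.0) ** 2   # ~0.2919
p_dead = math.sin(1.0) ** 2    # ~0.7081
assert abs(p_alive + p_dead - 1.0) < 1e-12

# A "k alive-worlds out of n worlds" story always gives a rational number k/n,
# so it can only approximate such a probability, never match it exactly.
approx = Fraction(p_alive).limit_denominator(1000)
print(p_alive, approx, float(approx) - p_alive)
```

So if you want branch-counting to reproduce Born-rule probabilities, a finite number of equally weighted branches won’t do it, which is one way people end up at “infinitely many branches” or at a continuous measure over branches instead.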
**[Even more speculative than the other points above.] Perhaps the concept of “branching” isn’t exactly appropriate, and there’s some equivalence between the MW quantum multiverse and a single universe with infinite spatial extent, where there are also infinite copies of you, each copy extremely far apart from the others. (In an infinitely spatially extended universe with fixed physical laws and random initial conditions, macroscopic patterns would start to repeat themselves eventually at a far enough distance, so you’d have infinite exact copies of yourself and infinite nearly-exact copies.) Maybe what we experience/think of as “branching” is just consciousness moments hopping from one subjectively indistinguishable location to the next. This sounds wild, but it’s interesting that when you compare different ways for there to be a multiverse, the MWI and the infinitely spatially extended universe have the same laws of physics, so there’s some reason to think they might be two ways of describing the same thing. By contrast, inflationary cosmology, which is yet another way you can get a “multiverse,” would generate universe bubbles with different laws of physics for each bubble. (At least, that’s what I remember from the book The Hidden Reality.) (I found a paper that discusses the hypothesis that the infinitely spatially extended multiverse is the same as the quantum multiverse – it attributes the idea to Tegmark and Aguirre, but I first heard it from Yudkowsky. The paper claims to argue that the idea is false, for what it’s worth.)
***To elaborate on “philosophical issues around what it means for something to exist”: consider the weird idea that there might be these infinite copies of ourselves out there, some of which should find themselves in bizarre circumstances where the world isn’t behaving predictably. (If there are infinite copies of you in total, you can’t really say “there are more copies of you in environments where the furniture doesn’t turn into broccoli the next second than there are copies in environments where it does.” After all, there are infinite copies in both types of environment!) This raises questions like “Why do things generally appear lawful/predictable to us?” and “How much should we care about copies of ourselves that find themselves in worlds where the furniture turns into broccoli?” So people speculate about whether there’s some mysterious “reality fluid” that could be concentrated in worlds that are simpler and that therefore appear more normal/predictable to us. (One way to maybe think of this is that the universe is a giant automaton that’s being run, and existence corresponds not just to whether there’s a mathematical description of the patterns that make up you and your environment, but also somehow to “actually being run” or “(relative?) run-time.”) Alternatively, there’s a philosophical view that we may call “existence anti-realism.” We start by noting that the concept of “existence” looks suspicious. David Chalmers coined the term bedrock concepts for concepts that we cannot re-formulate in non-question-begging terminology (terminology from another domain). So these concepts are claimed to be “irreducible.” Concepts like “moral” or “conscious” are other contenders for bedrock concepts. Interestingly enough, when we investigate purported bedrock concepts, many of them turn out to be reducible after all (e.g., almost all philosophers think concepts like “beautiful” are reducible; many philosophers think moral concepts are reducible; a bunch of philosophers are consciousness anti-realists, etc.). See this typology of bedrock concepts I made, where existence anti-realism is the craziest tier. It takes the sort of reasoning that is common on LessWrong as far as it can go. It claims that whether something “exists” is a bit of a confused question, and that our answers to it depend on how our minds are built, like what priors we have over worlds or what sort of configurations we care about. I don’t understand it, really. But here’s a confusing dialogue on the topic.

As I said in my earlier comment, it’s a rabbit hole.
Thanks for clarifying! I think I get what you’re saying. This certainly is a rabbit hole. But to bring it back to the points that I initially tried to make, I’m kind of struggling to figure out what the upshot would be. The following seem to me to be possible take-aways:
1.) While the considerations in the ballpark of what I’ve presented do have counterintuitive implications (if we’re spawning infinite divisions every second, that must have some hefty implications for how we should and shouldn’t act, mustn’t it?), fanaticism per se doesn’t have any weird implications for how we should be behaving because it is fairly likely that we’re already producing infinite amounts of value and so long shots don’t enter into it.
2.) Fanaticism per se doesn’t have any weird implications for how we should be behaving, because it is fairly likely that the best ways to produce stupendous amounts of value happen to align closely with what commonsense EA suggests we should be doing anyway. (I like Michael St. Jules’ approach to this, which says we should promote the long-term future of humanity so we have the chance to research possible transfinite amounts of value.)
3.) These issues are so complicated that there is no way to know what to do if we’re going fanatical, so even if trying to create branches appears to have more expected utility than ordinary altruistic actions, we should stick to the ordinary altruistic actions to avoid opening up that can of worms.
I definitely think that if you were 100% confident in the simple MWI view, that should really dominate your altruistic concern.
TBH I don’t think this makes sense. Every decision you make in this scenario, including the one to promote or stop branching, would be a result of some quantum processes (because everything is a quantum process), so the universe where you decided to do it would be complemented by one where you didn’t. None of your decisions have any effect on the amount of suffering etc., if it’s taken as a sum over universes.