Thanks for writing this — in general I am pro thinking more about what MWI could entail!
But I think it’s worth being clear about what this kind of intervention would achieve. Importantly (as I’m sure you’re aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here), or decrease the overall (subjective) chance of existential catastrophe.
But it could increase the chance of something like “at least [some small fraction]% of ‘branches’ survive catastrophe”, or at the extreme “at least one ‘branch’ survives catastrophe”. If you have some special reason to care about this, then this could be good.
For instance, suppose you think that whether or not to accelerate AI capabilities research in the US is likely to have a very large impact on the chance of existential catastrophe, but you’re unsure about the sign. To use some ridiculous play numbers: maybe you’re split 50-50 between thinking that investing in AI raises p(catastrophe) to 98% (and it stays at 0 otherwise), and thinking that investing in AI lowers p(catastrophe) to 0 (and it stays at 98% otherwise). If you flip a ‘classical’ coin, the expected chance of catastrophe is 49%, but you can’t be sure we’ll end up in a world where we survive. If you flip a ‘quantum’ coin and split into two ‘branches’ with equal measure, you can be sure that one world will survive (and another will encounter catastrophe with 98% likelihood). So you’ve increased the chance that ‘at least 40% of the future worlds will survive’ from 50% to 100%.[1]
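To make those play numbers concrete, here is a minimal sketch of the arithmetic (the two hypotheses, the 98% figure, and the 40% threshold are just the toy setup above, not real estimates):

```python
# A minimal sketch of the toy coin example above; the 50-50 hypotheses, the 98%
# figure, and the 40% threshold are just the comment's play numbers.

# p(catastrophe | hypothesis, action)
hypotheses = {
    "investing causes catastrophe": {"invest": 0.98, "abstain": 0.00},
    "abstaining causes catastrophe": {"invest": 0.00, "abstain": 0.98},
}

# Classical coin: exactly one action actually happens, 50/50 which one.
expected_catastrophe = sum(
    0.5 * 0.5 * p                      # 0.5 per hypothesis, 0.5 per coin outcome
    for risks in hypotheses.values()
    for p in risks.values()
)
print(expected_catastrophe)            # 0.49 -- "the expected chance of catastrophe is 49%"
# Whichever way the classical coin lands, you still give 50% credence to the
# hypothesis on which ~98% of branches are lost, so the subjective chance that
# "at least 40% of future worlds survive" is only 50%.

# Quantum coin: both actions happen, in branches of measure 0.5 each.
for name, risks in hypotheses.items():
    surviving = 0.5 * (1 - risks["invest"]) + 0.5 * (1 - risks["abstain"])
    print(f"{name}: surviving measure = {surviving:.2f}")   # 0.51 under either hypothesis
# Under either hypothesis ~51% of branches survive, so "at least 40% of future
# worlds survive" is now certain -- while the expected surviving measure (0.51,
# i.e. expected catastrophe 0.49) is exactly what it was before.
```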
In general you’re moving from more overall uncertainty about whether things will turn out good or bad, to more certainty that things will turn out in some mixture of good and bad.
Maybe that sounds good, if for instance you think the mere fact that something exists is good in itself (you might have in mind that if someone perfectly duplicated the Mona Lisa, the duplicate would be worth less than the original, and that the analogy carries).[2]
But I also think it is astronomically unlikely that a world splitting exercise like this would make the difference[3] between ‘at least one branch survives’ and ‘no branches survive’. The reason is just that there are so, so many branches, such that —
It just seems very likely that at least some branches survive anyway;
Even if you thought there was a decent chance that no branches survive without the world splitting, you should have such a wide uncertainty over the number of branches you expect to survive that (I claim) your odds on something like [at least one branch will survive if we do split worlds, and no branches will survive if we don’t] should be very very low.[4] And I think this still goes through even if you split the world many times.
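To put a rough shape on that last point, here is a toy model made up purely for illustration: a Poisson-distributed number of surviving branches, and a single split that at most doubles its mean (the ‘factor-of-two’ point from footnote [4]). Neither assumption is from the original argument.

```python
import math

# Toy model (made-up assumptions, not the commenter's): the number of surviving
# branches is Poisson with mean lam, and a single world-split at most doubles
# lam (footnote [4]'s "factor-of-two" point). The split then flips "no branches
# survive" into "at least one survives" with probability
#   P(0 survivors | lam) - P(0 survivors | 2*lam) = exp(-lam) - exp(-2*lam).
for lam in (1e-6, 1e-3, math.log(2), 1.0, 10.0, 1e3):
    diff = math.exp(-lam) - math.exp(-2 * lam)
    print(f"lam = {lam:10.4g}   P(the split makes the difference) = {diff:.3g}")

# The difference is only non-negligible when lam is within an order of magnitude
# or so of 1, i.e. when you expect roughly "a handful" of surviving branches.
# If your uncertainty over lam spans many orders of magnitude, only a small
# slice of your credence sits there -- one way of cashing out why these odds
# should be very low.
```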
But also note that by splitting worlds you’re also increasing the chance that ‘at least 40% of the future worlds will encounter catastrophe’ from 48% to 99%. And maybe there’s a symmetry, where if you think there’s something intrinsically good about the fact that a good thing occurs at all, then you should think there’s something intrinsically bad about the fact that a bad thing occurs at all, and I count existential catastrophe as bad!
In general you’re moving from more overall uncertainty about whether things will turn out good or bad, to more certainty that things will turn out in some mixture of good and bad.
Yes, I agree
Maybe that sounds good, if for instance you think the mere fact that something exists is good in itself (you might have in mind that if someone perfectly duplicated the Mona Lisa, the duplicate would be worth less than the original, and that the analogy carries)
This analogy isn’t perfect. I’d prefer the analogy that, in a trolley problem in which the hostages were your family, one might care some small amount about ensuring that at least one family member survives (as opposed to maximizing the number of family members who survive).
But I also think it is astronomically unlikely that a world splitting exercise like this would make the difference[3] between ‘at least one branch survives’ and ‘no branches survive’.
Yeah, thinking about this more, it does seem like the strongest objection, and here is where I’d like an actual physicist to chip in. If I had to defend why it’s wrong, I’d say something like:
Yeah, but because quantum effects don’t really interact all that much with macroscopic events, this huge number of worlds are all incredibly correlated
As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.
Like, when I imagine a world in which quantum effects prevent an x-risk (AGI for illustration purposes) in the absence of human nudging, I imagine something like: quantum effects become large enough that the first few researchers who come up with how to program an AGI mysteriously die from aneurysms until the world notices and creates a world government to prevent AGI research (?)
I notice that I don’t actually think this is the scenario that requires the least quantum intervention, but I think that the general point kind of stands
As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.
Seems to me that pretty much whenever anyone would actually consider ‘splitting the timeline’ on some big uncertain question, then even if they didn’t decide to split the timeline, there are still going to be fairly non-weird worlds in which they make both decisions?
But this requires a quantum event/events to influence the decision, which seems more and more unlikely the closer you are to the decision. Though per this comment, you could also imagine that different people were born and would probably make different decisions.
I think I like the analogy below of preventing extinction being (in a world which goes extinct) somewhat akin to avoiding the Industrial Revolution / the discovery of steam engines or other efficient sources of energy. If you are only allowed to do it with quantum effects, your world becomes very weird.
Importantly (as I’m sure you’re aware), no amount of world slicing is going to increase the expected value of the future (roughly all the branches from here)
What makes you think that? So long as value can change with the distribution of events across branches (as perhaps with the Mona Lisa) the expected value of the future could easily change.
[1] It’s like choosing between [putting $50 on black and $50 on red] at a roulette table, and [putting $100 on red].
[3] Note this is not the same as claiming it’s highly unlikely that this intervention will increase the chance of surviving in at least one world.
[4] Because you are making at most a factor-of-two difference by ‘splitting’ the world once.