In general you’re moving from more overall uncertainty about whether things will turn out good or bad, to more certainty that things will turn out in some mixture of good and bad.
Yes, I agree
Maybe that sounds good, if for instance you think the mere fact that something exists is good in itself (you might have in mind that if someone perfectly duplicated the Mona Lisa, the duplicate would be worth less than the original, and that the analogy carries)
This analogy isn’t perfect. I’d prefer the analogy that, in a trolley problem in which the hostages were your family, one may care some small amount about ensuring that at least one family member survives (as opposed to maximizing the number of family members who survive)
But I also think it is astronomically unlikely that a world splitting exercise like this would make the difference[3] between ‘at least one branch survives’ and ‘no branches survive’.
Yeah, when thinking more about this, this does seem like the strongest objection, and here is where I’d like an actual physicist to chip in. If I had to defend why that is wrong, I’d say something like:
Yeah, but because quantum effects don’t really interact with macroscopic events all that much, this huge number of worlds is incredibly correlated
As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.
Like, when I imagine a world in which quantum effects prevent an x-risk (AGI for illustration purposes) in the absence of human nudging, I imagine something like: quantum effects become large enough that the first few researchers who figure out how to program an AGI mysteriously die from aneurysms until the world notices and creates a world government to prevent AGI research (?)
I notice that I don’t actually think this is the scenario that requires the least quantum intervention, but I think that the general point kind of stands
As you go into unlikelier and unlikelier worlds, you also go into weirder and weirder worlds.
Seems to me that pretty much whenever anyone would actually consider ‘splitting the timeline’ on some big uncertain question, then even if they didn’t decide to split the timeline, there are still going to be fairly non-weird worlds in which they make both decisions?
But this requires a quantum event (or events) to influence the decision, which seems increasingly unlikely the closer you are to the decision. Though per this comment, you could also imagine that different people were born and would probably make different decisions.
I think I like the analogy below of preventing extinction being (in a world which goes extinct) somewhat akin to avoiding the industrial revolution/discovery of steam engines, or other efficient sources of energy. If you are only allowed to do it with quantum effects, your world becomes very weird.