Even if you try to follow an unbounded utility function (which has deep mathematical problems, but set those aside for now), these conclusions don’t follow.
Generally, the claims here fall prey to the fallacy of applying the possibility of large consequences unevenly: you highlight it for some acts and not for others, and so wind up neglecting more likely paths to large consequences.
For instance, in an infinite world (including infinities created by infinite branching faster than you can control) with infinite copies of you, any decision, e.g. eating an apple, has infinite consequences on decision theories that account for the fact that all copies must make the same (distribution of) decisions. If perpetual motion machines or hypercomputation or baby universes are possible, then making a much more advanced and stable civilization is far more promising for realizing things related to that than giving in to religions for which you have very high likelihood ratios that they don’t feed into cosmic consequences.
Any plan for infinite/cosmic impact that has an extremely foolish step in it (like Pascal’s Mugging) is going to be dominated by less foolish plans.
There will still be implications of unbounded utility functions that are weird and terrible by the standards of other values, but they would have to follow from the most sophisticated analysis, and wouldn’t involve foolish instrumental irrationalities or an uneven accounting of possible consequences.
A lot of these scenarios are analogous to someone caricaturing the case for aid to the global poor as implying that people should give away all of the food they have (sending it by FedEx) to famine-struck regions, until they themselves starve to death. Yes, cosmopolitan concern for the poor can elicit huge sacrifices of other values like personal wellbeing or community loyalty, but that hypothetical is obviously wrong on its own terms as an implication.
> Generally the claims here fall prey to the fallacy of unevenly applying the possibility of large consequences to some acts where you highlight them and not to others, such that you wind up neglecting more likely paths to large consequences.
Could you be more specific about the claims that I make that involve this fallacy? This sounds to me like a general critique of Pascal’s mugging, which I don’t think fits the case that I’ve made. For instance, I suggested that the simple MWI has a probability ~1/10^18 and would mean that, if true, it is trivially possible to generate 2^1000 v in value, where v is all the value currently in the world. The expected value of doing things that might cause 1000 successive branchings is therefore ~10^283 v. Do you think that there is a higher probability way to generate a similar amount of value?
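As a sanity check on those exponents (taking the quoted figures at face value as assumptions, not endorsing them): a payoff of 2^1000 v discounted by a probability of 1/10^18 does leave an expected value of roughly 10^283 v.

```python
from math import log10

# Sanity check on the quoted figures (taken at face value):
# payoff 2**1000 * v, probability 1/10**18 that the simple
# branch-counting MWI story is correct.
# Work in log10 to keep the exponents readable.
payoff_exp = 1000 * log10(2)   # log10(2**1000) ~ 301.03
prob_exp = -18                 # log10(1/10**18)
ev_exp = payoff_exp + prob_exp
print(round(ev_exp))           # 283 -> expected value ~10**283 * v
```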
> then making a much more advanced and stable civilization is far more promising for realizing things related to that.
I suppose your point might be something like, absurdist research is promising, and that is precisely why we need humanity to spread throughout the stars. Just think of how many zany long-shot possibilities we’ll get to pursue! If so, that sounds fair to me. Maybe that is what the fanatic would want. It’s not obvious that we should focus on saving humanity for now and leave the absurd research for later. Asymmetries in time might make us much more powerful now than later, but I can see why you might think that. I find it a rather odd motivation though.
Here’s one application. You posit a divergent ‘exponentially splitting’ path for a universe. There are better versions of this story with baby universes (which work better on their own terms than counting branches equally irrespective of measure; equal counting assigns ~0 probability to our observations).
But in any case you get some kind of infinite exponentially growing branching tree ahead of you regardless. You then want to say that having two of these trees ahead of you (or a faster split rate) is better. Indeed, on this line you’re going to say that something that splits twice as fast is so much more valuable as to drive the first tree to ~nothing. Our world very much looks not-optimized for that, but it could be, for instance, a simulation or byproduct of such a tree, with a constant relationship of such simulations to the faster-expanding tree (and any action we take is replicated across the endless identical copies of us therein).
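To see why a faster split rate swamps a slower one under naive equal-weight branch counting, compare raw branch counts. A toy sketch (split rates of 2 and 4 per step chosen purely for illustration): the slower tree's share of all branches collapses toward zero.

```python
# Toy illustration: under equal weighting of branches irrespective
# of measure, a tree that splits twice as fast dominates.
# 'slow' splits into 2 branches per step, 'fast' into 4.
for t in (1, 10, 50):
    slow, fast = 2 ** t, 4 ** t
    share = slow / (slow + fast)   # slow tree's share of all branches
    print(t, share)                # share = 1/(1 + 2**t), shrinking fast
```

At t = 50 steps the slower tree already accounts for less than one part in 10^15 of the branches, which is the sense in which the first tree is driven to ~nothing.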
Or you can say we’re part of a set of parallel universes that don’t split but which is as ‘large’ as the infinite limit of the fastest splitting process.
> I suppose your point might be something like, absurdist research is promising, and that is precisely why we need humanity to spread throughout the stars. Just think of how many zany long-shot possibilities we’ll get to pursue! If so, that sounds fair to me. Maybe that is what the fanatic would want. It’s not obvious that we should focus on saving humanity for now and leave the absurd research for later. Asymmetries in time might make us much more powerful now than later, but I can see why you might think that. I find it a rather odd motivation though.
Personally, I think we should have a bounded social welfare function (and can’t actually have an unbounded one), but place finite utility on doing a good job picking low-hanging fruit on these infinite-scope possibilities. But that’s separate from the question of what efficient resource expenditure on those possibilities looks like.
I give the MWI a probability of greater than 0.5 of being correct, but as far as I can tell, there isn’t any way to generate more value out of it. There isn’t any way to create more branches. You can only choose to be intentional and explicit about creating new, identifiable branches, but that doesn’t mean you’ve created more branches. The branching happens regardless of human action.
Someone with a better understanding of this please weigh in.
Pursuing (or influencing others to pursue) larger cardinal numbers of value, e.g. creating or preventing the existence of ℵ5 possible beings, seems sufficiently neglected relative to extinction risk reduction, and the chances of value lock-in are high enough, that increasing or decreasing the expected resources used to generate such higher cardinals of (dis)value, or improving their quality conditional on an advanced stable civilization, looks at least roughly as promising as extinction risk reduction for a scope-sensitive expected value maximizer. (However, plausibly you should just be indifferent to everything, if you aggregate value before taking differences rather than after.)