This is somewhat like my comment on the other post, but it’s the feedback I could come up with here.
--
I liked your point IV, that inefficiency might not go away. One reason it might not is that humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now. Human values may not evolve over time according to our concept of “reasonable rational values.” In human history, there have been impulses toward both the rational and the irrational. So future people might for some reason prefer something like “authentic” beef from a real, biological cow (rather than digital-world simulated beef), or wish to make some kind of sacrifice of “atoms” for some weird far-future religion or quasi-religion that evolves.
--
I don’t know if my view is a mainstream one in longtermism, but I tend to think that civilization is inherently prone to fragility, and that it is uncertain we will ever have faster-than-light travel or communications. (I haven’t thought a lot about these things, so maybe someone can show me a better way to see this.) If we don’t have FTL, then the different planets we colonize will be far enough apart to develop divergent cultures, and will generally be beyond the help of others in case of trouble. Maybe the trouble would be something like an asteroid strike. Or maybe it would be an endogenous cultural problem, like a power struggle among digital humans rippling out into the operation of the colony.
If this “trouble” caused a breakdown in civilization on some remote planet, it might impair the colonists’ ability to do high-tech things (like produce cultured meat). If there is some risk of this happening, they would probably try to have some kind of backup system. The backup could be flesh-and-blood humans (more resilient in a physical environment than digital beings, even ones wedded to advanced robotics), along with a natural ecosystem and some kind of agriculture. They would have to keep the backup ecosystem and humans going throughout their history, and then if “trouble” came, the backup ecosystem and society could take over: maybe temporarily, while hoping to return to a high-tech digital-human society, or maybe permanently, if they feel like it.
At that point, whether factory farming gets redeveloped depends entirely on the culture of the backup society staying true to “no factory farming.” If they do redevelop it, that would add to the far future’s “burden of suffering” (or whatever term is better than that).
I guess one way to prevent this kind of thing from happening (maybe what longtermists already suggest) is to simply assume that some planets will break down, and plan to re-colonize them when that happens, instead of expecting them to be able to deal with their own problems.
I guess if there isn’t such a thing as FTL, our ability to colonize space will be greatly limited, and so the sheer quantity of suffering possible will be a lot lower (as will whatever good sentience gets out of existence). But if, say, we only colonize 100 planets over the remainder of our existence (under no-FTL), and 5% of them redevelop factory farming, that’s still five planets with factory farming, five times as many as Earth today.