More intense lives will, on expectation, be able to be engineered for a longer time period, at a higher density, and across a larger space through biological augmentation or virtual reality than through nature. So terraforming is a red herring here, because most (approximately all) human and animal experience will be engineered by biotech in the long run.
most (approximately all) human and animal experience will be engineered by biotech in the long run
You’re making a very strong claim about something that has never happened in the past, based on speculation about what’s technologically feasible and about what the people with power will want to do. Maybe you’re right, but you seem really overconfident here.
I mean “on expectation” as in it’s at least slightly more likely than not, based on what little we currently know, but I’m still very interested in new evidence.
Do you think it is likely that humans will run sentient simulations in the future? It could be that wild animal brain simulations and “suffering subroutines” dominate future expected utility calculations.
Sure, any subroutines could exist in the future. In artificial worlds, the range of possible experience should be much larger than it is presently. The incentives for researching and developing entertainment should be much larger than for engineering psychological harm. Generalisations from the natural world wouldn’t necessarily carry over to simulations, but on the inside view, net flourishing is expected.
Arguments not downvotes, please!