Having a superintelligence aligned to normal human values seems like a big win to me!
Not super sure what this means, but the ‘normal human values’ outcome as I’ve defined it hardly contributes to the EV calculation at all compared to the utopia outcome. If you disagree with this, please look at the math and let me know if I made a mistake.
Sure. The math is clearly very handwavy, but I think there are basically two issues.
Firstly, the mediocre outcome supposedly involves a superintelligence optimising for normal human values, potentially including simulating people. Yet it only involves 10 billion humans per star, fewer than we are currently forecast to support on a single un-optimised planet with no simulations, no AGI help, and relatively primitive technology. At the very least I would expect massive terraforming and efficient food production to support much higher populations, if not full Dyson spheres and simulations. It’s not going to be as many people as the other scenario, but it’ll hopefully be more than Earth2100.
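For concreteness, here’s a rough back-of-envelope sketch in Python of the point I’m making. This is not the original post’s math: every probability, population, and welfare figure below is a hypothetical placeholder, and `ev_share` is just a crude "probability × population × welfare" stand-in. The only thing it’s meant to show is how strongly the per-star population assumption drives how much the mediocre outcome contributes relative to utopia.

```python
# Hypothetical back-of-envelope: how much does the 'normal human values'
# outcome contribute to EV relative to the utopia outcome, under different
# assumptions about population per star? All numbers are illustrative
# placeholders, not figures from the original model.

def ev_share(p_outcome, pop_per_star, welfare_per_person):
    """Crude EV contribution: probability x population per star x average welfare."""
    return p_outcome * pop_per_star * welfare_per_person

# Hypothetical utopia outcome: Dyson-sphere-scale simulated populations.
utopia = ev_share(p_outcome=0.1, pop_per_star=1e25, welfare_per_person=1.0)

# 'Normal human values' outcome as stated: 10 billion humans per star.
mediocre_low = ev_share(p_outcome=0.1, pop_per_star=1e10, welfare_per_person=0.8)

# Same outcome, but with terraforming / partial simulation, as argued above.
mediocre_high = ev_share(p_outcome=0.1, pop_per_star=1e20, welfare_per_person=0.8)

for label, ev in [("utopia", utopia),
                  ("mediocre (10 B/star)", mediocre_low),
                  ("mediocre (optimised)", mediocre_high)]:
    print(f"{label:>22}: {ev:.2e}  ({ev / utopia:.1%} of utopia)")
```

Under placeholder numbers like these, taking the 10-billion-per-star figure at face value makes the mediocre outcome a rounding error next to utopia, whereas allowing the superintelligence to actually optimise for those values closes a lot of that gap.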
Secondly, I think the utilitarian outcome is overvalued on anything but purely utilitarian criteria. A world of soma-brains, without love, friendship, meaningful challenges, etc., would strike many people as quite undesirable.
It seems like it would be relatively easy to make this world significantly better by conventional lights at low utilitarian cost. For example, giving the simulated humans the ability to turn themselves off might incur a positive but small overhead (as presumably very few happy people would take this option), while being a significant improvement by the standards of conventional ethical views that value consent.