This is super speculative of course, but if the future involves competition between different civilizations / value systems, do you think having to devote, say, 96% (i.e. 24⁄25) of a civilization's storage capacity to redundancy would significantly weaken its fitness? I guess it would depend on what fraction of total resources are spent on information storage...?
Also, by the same token, even if there is a “singleton” at some relatively early time, mightn’t it prefer to take on a non-negligible risk of value drift later in time if it means being able to, say, 10x its effective storage capacity in the meantime?
(I know your 24⁄25 was a conservative estimate in some ways; on the other hand it only addresses the first billion years, which is arguably only a small fraction of the possible future, so hopefully it’s not too biased a number to anchor on!)
Depends on how much of their data they'd have to back up like this. If every bit ever produced or operated on instead had to be 25 bits, that seems like a big fitness hit. But if they're only this paranoid about a few crucial files (e.g. the minds of a few decision-makers), then that's cheap.
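(To make that concrete, here's a quick back-of-the-envelope sketch in Python. The 25x replication factor is the figure from this thread; the "crucial fraction" values are just made up for illustration:)

```python
# Rough sketch: how much of total physical storage goes to redundancy if only
# a fraction f of the logical bits get 25x replication (the 24/25 figure
# above) and the rest are stored once. Numbers are illustrative only.

REPLICATION = 25  # physical copies per "crucial" logical bit, per the thread


def overhead_fraction(f_crucial: float) -> float:
    """Fraction of total physical storage that is redundant copies,
    given that a fraction f_crucial of logical bits are replicated 25x."""
    physical = f_crucial * REPLICATION + (1 - f_crucial) * 1
    useful = 1.0  # one logical copy of everything
    return (physical - useful) / physical


for f in [1.0, 0.1, 0.01, 1e-6]:
    print(f"crucial fraction {f:>8}: {overhead_fraction(f):6.1%} of storage is redundancy")
```

So if only ~1% of logical bits need the full 25x treatment, the redundancy overhead drops from 96% of total storage to roughly 19%, and it keeps shrinking as the crucial fraction does.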
And there's another question about how much stability contributes to fitness. In humans, cancer tends not to be great for fitness. Analogously, it's possible that most random errors in future civilizations would look less like slowly corrupting values and more like a coordinated whole splintering into squabbling factions that can easily be conquered by a unified enemy. If so, you might think that an institution that cared about stopping value drift and an institution that didn't would both have a similarly large interest in preventing random errors.
> Also, by the same token, even if there is a “singleton” at some relatively early time, mightn’t it prefer to take on a non-negligible risk of value drift later in time if it means being able to, say, 10x its effective storage capacity in the meantime?
The counter-argument is that it will be super rich regardless, so it seems like satiable value systems would be happy to spend a lot on preventing really bad events from happening with small probability. Whereas insatiable value systems would notice that most resources are in the cosmos, and so would also be obsessed with avoiding unwanted value drift. But yeah, if the values contain a pure time preference, and/or don't care that much about the most probable types of value drift, then it's possible that they wouldn't deem the investment worth it.
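(A toy illustration of that point, with all the numbers invented: for an insatiable value system that values the cosmic endowment linearly, even a small drift probability swamps a 10x near-term storage gain:)

```python
# Toy comparison: skimp on redundancy (get 10x near-term effective storage
# but accept some probability of value drift that forfeits the long-run
# cosmic endowment) vs. pay the 25x overhead. All numbers are invented.

COSMIC_VALUE = 1e9   # long-run value of cosmic resources (arbitrary units)
NEAR_TERM = 1.0      # near-term value at 1x effective storage
P_DRIFT = 0.01       # invented drift probability if you skimp on redundancy

ev_skimp = 10 * NEAR_TERM + (1 - P_DRIFT) * COSMIC_VALUE
ev_safe = 1 * NEAR_TERM + COSMIC_VALUE

print(f"skimp: {ev_skimp:.6g}, safe: {ev_safe:.6g}")
# Skimping only wins if P_DRIFT * COSMIC_VALUE < 9 * NEAR_TERM, i.e. if the
# drift risk (or the weight placed on cosmic resources) is tiny.
```

A pure time preference would discount COSMIC_VALUE toward the near-term scale, which is exactly how the investment can stop being worth it.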
Cool, thanks for thinking this through!