Thank you for engaging, and especially for the intelligence curse point—that’s exactly the structural issue I’m trying to get at.
You suggest I’m arguing “we should care about some of those things intrinsically.” Let me use AGI as an example to show why I don’t think this is about intrinsic value at all:
What would an AGI need to persist for a million years?
Not “what targets should it optimize for” but “what maintains the AGI itself across that timespan?”
I think the answer is: diversity (multiple approaches for unforeseen challenges), error correction (detecting when models fail), adaptive capacity (sensing and learning, not just executing), and substrate maintenance (keeping the infrastructure running).
An AGI optimizing toward distant targets while destroying these properties would be destroying its own substrate for persistence. The daily maintenance—power, sensors, error detection—isn’t preparation for the target. It IS what persistence consists of.
I think the same logic applies to longtermist societies. The question would shift from “how to allocate resources between present and future” to “are we maintaining or destroying the adaptive loop properties that enable any future to exist?” That changes what institutions would need to do—the essay explores some specific examples of what this might look like.
Does the AGI example help clarify the reframe I’m proposing?