Thanks for your post AJ, and especially this comment, which I found clarifying.
I’d be genuinely curious to hear how Cotton-Barratt and Hadshar see this difference. Is it a meaningful distinction? Are these frameworks reconcilable at different scales of analysis? When would we know which better serves long-term flourishing?
I’ve only skimmed your post, and haven’t read what Owen and I wrote in several years, but my quick take is:
- We’re saying ‘within a particular longtermist frame, it’s notable that it’s still rational to allocate resources to neartermist ends, for instrumental reasons’.
  - I think you agree with this.
  - Since writing that essay, I’m now more worried about AI making humans instrumentally obsolete, in a way that would weaken this dynamic a lot (I’m thinking of stuff like the intelligence curse). So I don’t actually feel confident this is true any more.
- I think you are saying ‘but that is not a good frame, and in fact normatively we should care about some of those things intrinsically’.
  - I agree, at least partially. I don’t think we intended to endorse that particular longtermist frame—just wanted to make the argument that even if you have it, you should still care about neartermist stuff. (And actually, caring intrinsically about neartermist stuff is part of what motivated making the argument, iirc.)
  - I vibed with some of your writing on this, e.g. “The Tuesday-morning maintenance network isn’t preparation for a future we’re aiming toward; it is the future, continuously instantiated.”
  - I’m not a straight-out yes—I think Wednesday in a million years might matter much more than this Tuesday morning, and am pretty convinced of some aspects of longtermism. But I agree with you in putting intrinsic value on the present moment and people’s experiences in it.
So my guess is, you have a fundamental disagreement with some version of longtermism, but less disagreement with me than you thought.
Thank you for engaging, and especially for the intelligence curse point—that’s exactly the structural issue I’m trying to get at.
You suggest I’m arguing “we should care about some of those things intrinsically.” Let me use AGI as an example to show why I don’t think this is about intrinsic value at all:
What would an AGI need to persist for a million years?
Not “what targets should it optimize for” but “what maintains the AGI itself across that timespan?”
I think the answer is: diversity (multiple approaches for unforeseen challenges), error correction (detecting when models fail), adaptive capacity (sensing and learning, not just executing), and substrate maintenance (keeping the infrastructure running).
An AGI optimizing toward distant targets while destroying these properties would be destroying its own substrate for persistence. The daily maintenance—power, sensors, error detection—isn’t preparation for the target. It IS what persistence consists of.
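To make that concrete, here’s a toy sketch (purely illustrative; the numbers, names, and dynamics are made up, and nothing here models an actual AGI). It compares a system that spends part of every cycle on substrate maintenance and error correction with one that puts everything into optimizing toward the distant target; only the first is still around at the horizon.

```python
import random

# Toy model, purely illustrative: the parameters and dynamics are invented
# just to show the structural point about persistence vs. target optimization.

def run(horizon: int, do_maintenance: bool) -> int:
    """Return how many cycles the system survives, up to `horizon`."""
    health = 1.0        # substrate condition (power, sensors, infrastructure)
    model_error = 0.0   # accumulated mismatch between the system's models and the world
    for t in range(horizon):
        health -= 0.02                             # exogenous wear arrives every cycle
        model_error += random.uniform(0.0, 0.05)   # surprises the models didn't predict

        if do_maintenance:
            health = min(1.0, health + 0.05)       # substrate maintenance
            model_error *= 0.5                     # error correction / adaptation
        # else: every resource goes to optimizing toward the distant target

        if health <= 0 or model_error >= 1.0:
            return t   # the system stopped persisting long before the target
    return horizon

print(run(1_000_000, do_maintenance=True))    # survives the full horizon
print(run(1_000_000, do_maintenance=False))   # collapses within ~50 cycles
```

The specific numbers don’t matter; the point is that the maintenance steps aren’t a tax on reaching the horizon, they’re what being around at the horizon consists of.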
I think the same logic applies to longtermist societies. The question would shift from “how to allocate resources between present and future” to “are we maintaining or destroying the adaptive loop properties that enable any future to exist?” That changes what institutions would need to do—the essay explores some specific examples of what this might look like.
Does the AGI example help clarify the reframe I’m proposing?