Afterword: A Note of Appreciation and Reflection
I want to begin by thanking Owen Cotton-Barratt and Rose Hadshar for their thoughtful and important chapter. Their willingness to examine what longtermist societies might actually look like—moving beyond marginal analysis to whole-system thinking—opens necessary terrain. This essay is offered in that same spirit of serious engagement, not as refutation but as extension.
My response is constructive, though it may read as fundamental disagreement. I believe we share a deep concern: how do we enable human flourishing to persist? Where we differ, I think, is in our conceptual starting point, and this difference ramifies through everything that follows.
Two frameworks for thinking about persistence:
Cotton-Barratt and Hadshar work from what might be called a projection framework: existence is distributed across time, and the question is how to allocate resources between temporal slices—present people now, future people later. Within this framework, their insight is important: even extreme longtermism requires substantial investment in present welfare for instrumental reasons. People whose basic needs aren’t met cannot do complex work.
My essay works from what might be called a process framework: existence is not a quantity distributed across time but a continuous adaptive process. There is no “present existence” separate from “future existence”—only ongoing maintenance of adaptive capacity. The question becomes not how to optimize for distant projected outcomes, but whether we’re maintaining the structures that enable any outcomes at all.
Why this difference matters:
These aren’t just semantic alternatives. They lead to different institutional designs, different understandings of risk, and different responses to uncertainty.
Cotton-Barratt and Hadshar recognize that instrumental reasons require present welfare. I’m suggesting something stronger: that the process of maintaining present adaptive capacity—the sense-learn-adapt-coordinate-repair loop—isn’t instrumental to distant goals but constitutive of what persistence means. The Tuesday-morning maintenance network isn’t preparation for a future we’re aiming toward; it is the future, continuously instantiated.
This difference also brings different risks into view. The incentive gradient I describe—the structural drift toward configurations that optimize measurable proxies while degrading adaptive capacity—isn’t visible from a projection framework, because it looks like progress on longtermist goals right up until the system can no longer adapt to surprises.
What I hope this contributes:
Cotton-Barratt and Hadshar’s analysis helps us think carefully about constraints and resource allocation. Their distinction between partial and strict longtermism, their attention to legitimacy concerns, their recognition of instrumental value—all of this is valuable.
My hope is that the process framework adds something complementary: a way to think about systemic resilience, about what makes persistence possible in the face of deep uncertainty, about why maintaining diversity, autonomy, error correction, and genuine interdependence might not be constraints on longtermism but prerequisites for anything to persist at all.
An invitation:
I’d be genuinely curious to hear how Cotton-Barratt and Hadshar see this difference. Is it a meaningful distinction? Are these frameworks reconcilable at different scales of analysis? When would we know which better serves long-term flourishing?
Perhaps the most important test is this: when unforeseen challenges arrive—as they inevitably will—which approach has preserved the adaptive capacity to sense them early, learn from evidence, coordinate responses, and iterate toward solutions?
I suspect we all want the same thing: a future where human flourishing continues. The question is how we think about—and design for—that persistence. I offer this essay as one contribution to that ongoing conversation.
A note on method: For transparency, I used Claude Sonnet 4.5 and ChatGPT-5 as thinking partners and writing tools for this essay—for structure, clarity, and articulation. The core framework, however, emerges from my hands-on work with dynamic network modeling of infectious diseases, and my training across biology, economics, and philosophy. The loop-maintenance perspective reflects years of thinking and exploration and was sparked by conceptual reflection on oak trees. The ideas are mine; the AI helped me say them clearly.
Thanks AJ!
My impression is that although your essay frames this as a deep disagreement, in fact you’re reacting to something that we’re not saying. I basically agree with the heart of the content here—that there are serious failure modes to be scared of if attempting to orient to the long term, and that something like loop-preservation is (along with the various more prosaic welfare goods we discussed) essential for the health of even a strict longtermist society.
However, I think that what we wrote may have been compatible with the view that you have such a negative reaction to, and at minimum I wish that we’d spent some more words exploring this kind of dynamic. So I appreciate your response.
Thanks for the generous response. You write that we “may have been compatible” and I’m “reacting to something you’re not saying.”
Here’s my concern: I’ve come to recognize that reality operates as a dynamic network—nodes (people, institutions) whose capacity is constituted by the relationships among them. This isn’t just a modeling choice; it’s how cities function, how pandemics spread, how states maintain capacity. You don’t work from this explicit recognition.
This creates an asymmetry. Once you see reality as a network, your Section 5 framework becomes incompatible with mine—not just incomplete, but incoherent. You explicitly frame the state as separate from people, optimizing for longtermist goals while managing preferences as constraints. But from the network perspective, this separation doesn’t exist—the state’s capacity just IS those relationships. You can’t optimize one while managing the other.
Let me try to say this more directly: I’ve come to understand my own intelligence as existing not AT my neurons, but BETWEEN them—as a pattern of activation across connections. I am the edge, not the node. And I see society the same way: capacity isn’t located IN institutions, it emerges FROM relationships. From this perspective, your Section 5 (state separate from people) isn’t a simplification—it’s treating edges as if they were nodes, which fundamentally misunderstands what state capacity is.
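To make the edges-versus-nodes point concrete, here is a minimal toy sketch of my own (nothing from Owen and Rose’s chapter, and deliberately oversimplified). Every node stays in place; only edges are removed; and a simple capacity metric, the average fraction of the network each node can still reach, collapses anyway.

```python
# Toy illustration: "capacity lives in the edges, not the nodes."
# All nodes are kept intact; only edges are removed. A simple capacity
# metric (average reachable fraction of the network) collapses anyway.
from collections import deque
import random

def reachable(adj, start):
    """Breadth-first search: the set of nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def mean_reach(adj):
    """Average fraction of the network each node can reach."""
    n = len(adj)
    return sum(len(reachable(adj, v)) for v in adj) / (n * n)

def ring_network(n=60, shortcuts=10, seed=0):
    """A ring of n nodes plus a few random shortcut edges."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    edges = [(v, (v + 1) % n) for v in range(n)]
    edges += [tuple(rng.sample(range(n), 2)) for _ in range(shortcuts)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def remove_random_edges(adj, fraction, seed=1):
    """Copy of the network with `fraction` of edges deleted; every node stays."""
    rng = random.Random(seed)
    edges = sorted({tuple(sorted((a, b))) for a in adj for b in adj[a]})
    keep = rng.sample(edges, int(len(edges) * (1 - fraction)))
    new = {v: set() for v in adj}   # same node set as before
    for a, b in keep:
        new[a].add(b)
        new[b].add(a)
    return new

if __name__ == "__main__":
    net = ring_network()
    for frac in (0.0, 0.3, 0.6, 0.9):
        thinned = remove_random_edges(net, frac)
        print(f"edges removed: {frac:.0%}   mean reach: {mean_reach(thinned):.2f}")
```

In epidemic-modeling terms: all the people are still there, but what we would call transmission capacity, or here state capacity, is gone, because it was never located in the nodes to begin with.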
That’s the asymmetry: your explicit framing (state separate from people) is incompatible with how I now understand reality. But if you haven’t recognized the network structure, you’d just see my essay as “adding important considerations” rather than revealing a foundational incompatibility.
Does this help clarify where I’m coming from?
Thanks for your post AJ, and esp this comment which I found clarifying.
I’ve only skimmed your post, and haven’t read what Owen and I wrote in several years, but my quick take is:
We’re saying ‘within a particular longtermist frame, it’s notable that it’s still rational to allocate resources to neartermist ends, for instrumental reasons’
I think you agree with this
Since writing that essay, I’m now more worried about AI making humans instrumentally obsolete, in a way that would weaken this dynamic a lot (I’m thinking of stuff like the intelligence curse). So I don’t actually feel confident this is true any more.
I think you are saying ‘but that is not a good frame, and in fact normatively we should care about some of those things intrinsically’
I agree, at least partially. I don’t think we intended to endorse that particular longtermist frame—just wanted to make the argument that even if you have it, you should still care about neartermist stuff. (And actually, caring intrinsically about neartermist stuff is part of what motivated making the argument, iirc.)
I vibed with some of your writing on this, e.g. “The Tuesday-morning maintenance network isn’t preparation for a future we’re aiming toward; it is the future, continuously instantiated.”
I’m not a straight-out yes—I think Wednesday in a million years might matter much more than this Tuesday morning, and am pretty convinced of some aspects of longtermism. But I agree with you in putting intrinsic value on the present moment and people’s experiences in it.
So my guess is, you have a fundamental disagreement with some version of longtermism, but less disagreement with me than you thought.
Thank you for engaging, and especially for the intelligence curse point—that’s exactly the structural issue I’m trying to get at.
You suggest I’m arguing “we should care about some of those things intrinsically.” Let me use AGI as an example to show why I don’t think this is about intrinsic value at all:
What would an AGI need to persist for a million years?
Not “what targets should it optimize for” but “what maintains the AGI itself across that timespan?”
I think the answer is: diversity (multiple approaches for unforeseen challenges), error correction (detecting when models fail), adaptive capacity (sensing and learning, not just executing), and substrate maintenance (keeping the infrastructure running).
An AGI optimizing toward distant targets while destroying these properties would be destroying its own substrate for persistence. The daily maintenance—power, sensors, error detection—isn’t preparation for the target. It IS what persistence consists of.
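To put a number on that, here is a deliberately crude sketch (my own toy model with made-up payoffs, not anything from the chapter, and not a claim about any real AGI design). Two planners pursue the same distant target; only one keeps loop properties alive, namely a small portfolio of strategies plus a periodic error-correction check. When the environment shifts at an unforeseen moment, the lock-in planner’s performance collapses for good, while the loop-maintaining planner pays a brief detection cost and recovers.

```python
# Toy comparison: a planner that locks in the initially-best strategy
# versus one that maintains a strategy portfolio and an error-correction
# step. The environment's "right answer" changes at an unforeseen time.
STRATEGIES = ["A", "B", "C"]

def payoff(strategy, regime):
    """1.0 if the strategy still matches the world, 0.1 otherwise."""
    return 1.0 if strategy == regime else 0.1

def run(planner, horizon=200, shift_at=103):
    regime = "A"
    choice = "A"           # both planners start on the initially-best strategy
    total = 0.0
    for t in range(horizon):
        if t == shift_at:  # the unforeseen surprise
            regime = "C"
        if planner == "loop" and t % 10 == 0:
            # Error correction: periodically re-check which strategy the
            # current evidence favours, and switch if needed.
            choice = max(STRATEGIES, key=lambda s: payoff(s, regime))
        # The lock-in planner never re-evaluates; `choice` stays "A".
        total += payoff(choice, regime)
    return total / horizon

if __name__ == "__main__":
    print("lock-in planner, mean payoff:", round(run("lock-in"), 2))
    print("loop planner,    mean payoff:", round(run("loop"), 2))
```

The numbers are arbitrary; the point is that the maintenance step looks like pure overhead right up until the shift arrives, which is exactly the incentive gradient the essay describes.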
I think the same logic applies to longtermist societies. The question would shift from “how to allocate resources between present and future” to “are we maintaining or destroying the adaptive loop properties that enable any future to exist?” That changes what institutions would need to do—the essay explores some specific examples of what this might look like.
Does the AGI example help clarify the reframe I’m proposing?