$d$ is a one-off action taken at $t=0$ whose effects accrue over time, analogous to $L$. (I could be wrong, but I'm proposing that the "long-term" in longtermism refers to utility obtained at different times, not to actions taken at different times, so removing the latter helps bring the definition of longtermism into focus.)
This condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there’s no need for present generations to care about improving the future.
Are you saying that actions could vary in their short-term goodness and their long-term goodness, with the two perfectly correlated? To me, this is a world where longtermism is true (we can tell an action's value from its long-term value), but it is also a world where shorttermism is true. Generations only need to care about the future if longtermism works but other heuristics fail. To your question, $r_t(d)$ is just the utility at time $t$ under $d$.
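To make that concrete, here is a minimal sketch in my own notation (the horizon $T$ and the short/long split are illustrative assumptions, not part of the original setup): split an action's value at some horizon $T$,

$$V_{\text{short}}(d) = \int_0^T r_t(d)\,dt, \qquad V_{\text{long}}(d) = \int_T^{\infty} r_t(d)\,dt, \qquad V(d) = V_{\text{short}}(d) + V_{\text{long}}(d).$$

If $V_{\text{short}}$ and $V_{\text{long}}$ are perfectly (and positively) correlated across the available actions, then ranking actions by $V_{\text{long}}$, by $V_{\text{short}}$, or by $V$ all produce the same ordering, which is the sense in which both longtermism and shorttermism would come out true in such a world.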
In my setup, I could say $\int_{t=0}^{\infty} M_t N_t u(c_t) e^{-\rho t}\,dt \approx \int_{t=T}^{\infty} M_t N_t u(c_t) e^{-\rho t}\,dt$ for some large $T$; i.e., generations $0$ to $T-1$ contribute basically nothing to total social utility. But I don't think this captures longtermism, because it is consistent with the social planner allocating no resources to safety work (and all resources to consumption of the current generation); the condition puts no constraints on $L^*$. In other words, this condition only matches the first of the three criteria that Will lists (see the sketch just after the list):
(i) Those who live at future times matter just as much, morally, as those who live today;
(ii) Society currently privileges those who live today above those who will live in the future; and
(iii) We should take action to rectify that, and help ensure the long-run future goes well.
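To spell that out (reading $M_t$ as the probability of survival to time $t$, $N_t$ as population, $c_t$ as per-capita consumption, and $L$ as the resources devoted to safety work; these readings are my assumptions about the setup, not something established above): the planner chooses an allocation to maximise

$$\int_0^{\infty} M_t N_t u(c_t) e^{-\rho t}\,dt,$$

and $L^*$ is the safety allocation at the resulting optimum. The "long-run-dominated" condition

$$\int_0^{\infty} M_t N_t u(c_t) e^{-\rho t}\,dt \approx \int_T^{\infty} M_t N_t u(c_t) e^{-\rho t}\,dt$$

only says where in time the integrand's mass sits at whatever allocation we evaluate it; it can hold even when the planner devotes nothing at all to safety, so it places no restriction on $L^*$.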
Interesting: defining longtermism as rectifying future disprivilege. This is different from what I was trying to model, and honestly it seems different from all the other definitions. Is this the sort of longtermism that you want to model?
If I were trying to model this, I would want to reference a baseline level of disparity under inaction, and then consider how a (possibly causal) intervention could improve on that.
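For concreteness, one way of cashing that out (all of this notation is mine, purely illustrative): let $D(\pi)$ measure how much a policy $\pi$ privileges present people over future people, for instance the gap between present welfare and suitably aggregated future welfare under $\pi$,

$$D(\pi) = W_{\text{present}}(\pi) - W_{\text{future}}(\pi).$$

With $\pi_0$ the no-intervention baseline, a "rectifying" intervention $d$ would be one with $D(d) < D(\pi_0)$, and the strength of the longtermist case for $d$ would then be something like the size of the reduction $D(\pi_0) - D(d)$.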
Do you think Will’s three criteria are inconsistent with the informal definition I used in the OP (“what most matters about our actions is their very long term effects”)?
Not inconsistent, but I think Will’s criteria are just one of many possible reasons that this might be the case.
On Will’s definition, longtermism and shorttermism are mutually exclusive.