status: very rambly. This is an idea I want to explore in an upcoming post about longtermism, and I would be grateful for anyone's thoughts. For more detailed context, see https://plato.stanford.edu/entries/time/ for the debate on the nature of time in philosophy.
Does rejecting longtermism require rejecting the B-Theory of Time (i.e. eternalism, the view that the past, present, and future have the same ontological status)? Saying that future people don't exist (and therefore can't be harmed, can't lose out by not existing, and don't have the same moral rights as present "existing" people) also implies that the future they live in (i.e. the city, country, planet, solar system, galaxy etc.) doesn't exist. This seems to fly in the face of our best understanding of physics: as far as I know, neither Relativity nor Quantum Mechanics has a place for a special "present", and in fact Relativity suggests that there is no such thing as a universal present.[1] I'm not sure people grapple with that contradiction.
Furthermore, if you want to claim that the special "present" moment does exist, and that our moral obligations only hold in that frame of time, doesn't that mean our obligations to the past don't matter? Many have criticised Bostrom's use of "mere ripples" to describe historical atrocities,[2] but doesn't believing in a unique present moment imply that the past doesn't exist, and hence that those atrocities don't exist either and that we have no obligations that refer to past events or people? One could get around this by assuming a "growing block" theory of time, where the past and present exist but the future doesn't. But if you believe in a special, unique present, then asserting that the past exists but the future doesn't seems no more valid than asserting the reverse.
I actually think this has legs as more than a niche philosophical point, but would be very interested in hearing others' thoughts.
FWIW, someone could reject longtermism for reasons other than specific person-affecting views or even pure time preferences.
Even without a universal present, there's still your present (and past), and you can do ethics relative to that. Maybe this doesn't seem impartial enough, and it could lead to agents with the same otherwise impartial ethical views and the same descriptive views disagreeing about what to do, which seems undesirable?
OTOH, causality is still directed, and we can still partially order events that way or via agreement across reference frames. The descendants of humanity, say, 1000 years from your present (well, humanity here in our part of the multiverse, say) are still all after your actions now, probably no matter what (physically valid) reference frame you consider, maybe barring time travel. This is because humans' reference frames are all very similar to one another, as differences in velocity, acceleration, force and gravity are generally very small.
So, one approach could be to rank or estimate the value of your available options on each reference frame and weigh across them, or look for agreement, or look for Pareto improvements. Right now for you, the different reference frames should agree, but they could come apart for you or other agents in the future if/when we or our descendants start colonizing space, traveling at substantial fractions of the speed of light.
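The scale of frame-disagreement being described here can be sketched numerically. This is a rough back-of-the-envelope calculation of the relativity of simultaneity, not anything from the comment itself; the velocities and distances are illustrative numbers I have chosen.

```python
# Back-of-the-envelope relativity of simultaneity (illustrative numbers).
# For two events simultaneous in frame S (dt = 0) and separated by dx,
# a frame moving at velocity v sees them separated by
#   dt' = gamma * (dt - v * dx / c^2).
import math

C = 299_792_458.0  # speed of light in m/s

def simultaneity_shift(v: float, dx: float, dt: float = 0.0) -> float:
    """Time separation (seconds) of two events in a frame moving at v,
    given their separation (dt, dx) in the original frame."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (dt - v * dx / C**2)

# Human-scale velocity difference (a fast jet, ~300 m/s),
# events on roughly opposite sides of the Earth (~1.3e7 m apart):
jet = simultaneity_shift(v=300.0, dx=1.3e7)

# A starship at half light speed, events one light-year apart:
light_year = C * 365.25 * 24 * 3600
ship = simultaneity_shift(v=0.5 * C, dx=light_year)

print(f"jet frame disagreement:  {jet:.3e} s")   # ~ -4.3e-08 s: negligible
print(f"ship frame disagreement: {ship:.3e} s")  # ~ -1.8e+07 s, about -0.58 years
```

At everyday velocity differences the disagreement over simultaneity is tens of nanoseconds even across planetary distances, which I take to be the point that human reference frames practically agree; at relativistic speeds and interstellar distances it grows to months or years.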
Also, people who won't come to exist don't exist under the B-theory, so they can't experience harm. Maybe they're harmed by not existing, but they won't be around to experience that. Future people could have interests, but if we only recognize interests for people who actually exist under the B-theory, then extinction wouldn't be bad for those who never come to exist as a result, because they don't exist under the B-theory.
Great response, Michael! Made me realise I'm conflating a lot of things (so bear with me).
By longtermism, I really mean MacAskill's:
future people matter
there could be a lot of them
our actions now could affect their future (in some morally significant way)
And in that sense, I really think only bullet 1 is the moral claim. The other two are empirical, about what our forecasts are and what our morally relevant actions are. I get the sense that those who reject longtermism want to reject it on moral grounds, not empirical ones, so they must reject bullet point 1. The main ways of doing so are, as you mention, person-affecting views or a pure rate of time preference, and I am sceptical that either comes without difficult bullets to bite.[1]
The argument I want to propose is this:
1. The moral theories we regard as correct ought to cohere with our current best understanding of physics[2]
2. The Special Theory of Relativity (STR) is part of our current best understanding of physics
3. STR implies that there is no universal present moment (i.e. one defined without an observer's explicit frame of reference)
4. Some set of person-affecting views [P] assume that there is a universal present moment (i.e. that we can clearly separate some people as not in the present and therefore not worthy of moral concern, from others who do have this property)
5. From 3 & 4, STR and P are contradictory
6. From 1 & 5, we ought to reject all person-affecting views that are in P
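For what it's worth, the core of the argument (the step from "3 & 4" to "5" and "6") has the shape of a simple modus tollens, which can be sketched formally. Below is a minimal Lean rendering; the proposition names STR, U (for "a universal present exists"), and P are my own labels, not anything from the post:

```lean
-- A minimal propositional sketch of the argument above.
-- STR : the Special Theory of Relativity is correct
-- U   : there is a universal present moment
-- P   : the person-affecting views in question hold
example (STR U P : Prop)
    (h3 : STR → ¬U)   -- premise 3: STR rules out a universal present
    (h4 : P → U)      -- premise 4: the views in P assume a universal present
    (h2 : STR) :      -- premises 1 & 2: we accept STR
    ¬P :=             -- conclusion (5 & 6): the views in P must be rejected
  fun hP => h3 h2 (h4 hP)
```

The logic itself is uncontroversial; as the surrounding text notes, the real work is in defending premises 1 and 4.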
And I would argue that a common naïve negative reaction to longtermism (along the lines of "potential future people don't matter, and it is evil to do anything for them at the expense of present people, since they don't exist") is in P, and therefore ought to be rejected. In fact, the only ways out of this seem to be either that someone's chosen person-affecting views are not in P, or that premise 1 is false. The former is open to question of course, and the latter seems highly suspect.
The point about certain people never existing under B-Theory is an interesting counter. Is it possible to have B-Theory without strong determinism (the kind that undermines moral responsibility)? I guess I'd reframe it as: under B-Theory, future people exist, and so whoever can causally affect their wellbeing (whether from their reference frame's past, present, or future) ought to do so if they can. You could still believe that it is empirically very difficult to do so, but it makes sense to me that, outside some special sphere of moral obligation (say my family, children, or those to whom I have made promises), the only difference between someone far away in time and someone close is the extent to which I can causally help them, not their position in time, in just the same way we might say that the reason to be sceptical of charity in different locations in the present is its effectiveness as aid, not its physical location in space.
That causal effectiveness (corresponding to opening bullet point 3) seems to me the most promising basis for a response to longtermism, rather than whether those future people exist or have moral value (bullet point 1), yet it seems that's what most objections lean on.
[1] I don't think having to bite bullets is a knockdown refutation, but if your view has them I want to see evidence you've included them in your meal, especially if you criticise others for not biting their own bullets.
[1] Physicists, please correct me if I've made egregious mistakes of interpretation here.
[2] And I'm not defending that language here.
[2] Interestingly, the corollary of this is that while one cannot derive an ought from an is, one can derive an is from an ought!