FWIW, someone could reject longtermism for reasons other than specific person-affecting views or even pure time preferences.
Even without a universal present, there’s still your present (and past), and you can do ethics relative to that. Maybe this doesn’t seem impartial enough, and it could lead to agents with the same otherwise impartial ethical views and the same descriptive views disagreeing about what to do, which seems undesirable?
OTOH, causality is still directed and we can still partially order events that way or via agreement across reference frames. The descendants of humanity, say, 1000 years from your present (well, humanity here in our part of the multiverse, say) are still all after your actions now, probably no matter what (physically valid) reference frame you consider, maybe barring time travel. This is because humans’ reference frames are all very similar to one another, as differences in velocity, acceleration, force and gravity are generally very small.
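To make that point concrete, here is a minimal sketch (in Python, with units where c = 1 and with illustrative numbers I’ve made up, not anything claimed above): for timelike-separated events, which is what your actions now and humanity’s descendants 1000 years hence are, every physically valid inertial frame agrees on which came first.

```python
# Sketch: the temporal order of timelike-separated events is frame-invariant.
# Units: c = 1 (times in years, distances in light-years). All numbers are
# illustrative assumptions, not anything claimed in the comment above.

def interval_squared(event_a, event_b):
    """Invariant interval s^2 = dt^2 - dx^2 between two events given as (t, x)."""
    dt = event_b[0] - event_a[0]
    dx = event_b[1] - event_a[1]
    return dt**2 - dx**2

def boost(event, v):
    """Lorentz boost of an event (t, x) into a frame moving at velocity v (|v| < 1)."""
    t, x = event
    gamma = 1.0 / (1.0 - v**2) ** 0.5
    return (gamma * (t - v * x), gamma * (x - v * t))

now = (0.0, 0.0)              # your action, here and now
descendants = (1000.0, 4.0)   # an event 1000 years later, 4 light-years away

# Timelike separation (s^2 > 0): the causal order is the same in every frame.
assert interval_squared(now, descendants) > 0

for v in (-0.9, -0.5, 0.0, 0.5, 0.9):   # a range of physically valid frames
    t_now, _ = boost(now, v)
    t_desc, _ = boost(descendants, v)
    print(f"v = {v:+.1f}c: descendants come later by {t_desc - t_now:.1f} time units")
```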
So, one approach could be to rank or estimate the value of your available options on each reference frame and weigh across them, or look for agreement or look for Pareto improvements. Right now for you, the different reference frames should agree, but they could come apart for you or other agents in the future if/when we or our descendants start colonizing space, traveling at substantial fractions of the speed of light.
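A toy sketch of that kind of procedure, with invented frames, options and values purely as placeholders, might look like this: score each option in each frame, then keep only the options that no other option Pareto-dominates, i.e. the choices every frame-relative ranking can agree on.

```python
# Toy sketch of "rank options per reference frame, then look for agreement /
# Pareto improvements". The frames, options and values are invented placeholders.

# value_by_frame[option][frame] = estimated value of the option as assessed in that frame
value_by_frame = {
    "status quo":       {"earth frame": 0.0, "fast-probe frame": 0.0},
    "long-term option": {"earth frame": 5.0, "fast-probe frame": 4.0},
    "near-term option": {"earth frame": 3.0, "fast-probe frame": 3.5},
}

def pareto_dominates(a, b):
    """a dominates b if no frame ranks a worse than b and some frame ranks it strictly better."""
    va, vb = value_by_frame[a], value_by_frame[b]
    return all(va[f] >= vb[f] for f in va) and any(va[f] > vb[f] for f in va)

# Keep only options that no other option Pareto-dominates, i.e. the frame-robust choices.
undominated = [
    o for o in value_by_frame
    if not any(pareto_dominates(other, o) for other in value_by_frame if other != o)
]
print("Options every frame-weighted ranking can live with:", undominated)
```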
Also, people who won’t come to exist don’t exist under the B-theory, so they can’t experience harm. Maybe they’re harmed by not existing, but they won’t be around to experience that. Future people could have interests, but if we only recognize interests for people that actually exist under the B-theory, then extinction wouldn’t be bad for those who never come to exist as a result, because they don’t exist under the B-theory.
Great response Michael! Made me realise I’m conflating a lot of things (so bear with me).
By longtermism, I really mean MacAskill’s:
1. future people matter
2. there could be a lot of them
3. our actions now could affect their future (in some morally significant way)
And in that sense, I really think only bullet 1 is the moral claim. The other two are empirical claims, about what our forecasts are and what our morally relevant actions are. I get the sense that those who reject longtermism want to reject it on moral grounds, not empirical ones, so they must reject bullet point 1. The main ways of doing so are, as you mention, person-affecting views or a pure rate of time preference, both of which, I suspect, come with difficult bullets of their own to bite.[1]
The argument I want to propose is this:
1. The moral theories we regard as correct ought to cohere with our current best understanding of physics[2]
2. The Special Theory of Relativity (STR) is part of our current best understanding of physics
3. STR implies that there is no universal present moment, i.e. no present that is independent of an observer’s frame of reference (see the sketch after this argument)
4. Some set of person-affecting views [P] assume that there is a universal present moment (i.e. that we can clearly separate some people as not in the present, and therefore not worthy of moral concern, from others who are)
5. From 3 & 4, STR and P are contradictory
6. From 1 & 5, we ought to reject all person-affecting views that are in P
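As a small numerical illustration of premise 3 (relativity of simultaneity), here is a sketch, again with c = 1 and with arbitrary example events of my own choosing: two events that are simultaneous in one frame are not simultaneous in a boosted frame, so there is no frame-independent “present”.

```python
# Sketch of premise 3 (relativity of simultaneity): two events simultaneous in one
# frame need not be simultaneous in another. Units: c = 1; the events and the boost
# velocity are arbitrary illustrative choices.

def boost(event, v):
    """Lorentz boost of an event (t, x) into a frame moving at velocity v (|v| < 1)."""
    t, x = event
    gamma = 1.0 / (1.0 - v**2) ** 0.5
    return (gamma * (t - v * x), gamma * (x - v * t))

event_here  = (0.0, 0.0)   # "now", here
event_there = (0.0, 1.0)   # "now", one light-year away: simultaneous in this frame

t_here, _  = boost(event_here, 0.6)
t_there, _ = boost(event_there, 0.6)
print(f"In a frame moving at 0.6c: local event at t = {t_here:+.2f}, "
      f"distant event at t = {t_there:+.2f} -- no shared present moment.")
```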
And I would argue that a common naïve negative reaction to longtermism (along the lines of “potential future people don’t matter, and it is evil to do anything for them at the expense of present people, since they don’t exist”) is in P, and therefore ought to be rejected. In fact, the only ways to really get out of this seem to be either that someone’s chosen person-affecting views are not in P, or that premise 1 is false. The former is open to question of course, and the latter seems highly suspect.
The point about certain people never existing under B-Theory is an interesting counter. Is it possible to have B-Theory without strong determinism (the kind that undermines moral responsibility)? I guess I’d reframe it as: under B-Theory, future people exist, and so whoever can causally affect their wellbeing (whether from a reference frame in their past, present, or future) ought to do so if they can. You could still believe that it is empirically very difficult to do so, but it makes sense to me that, outside some special sphere of moral obligation (say my family, children, or those I have made promises to), the only difference between someone far away in time and someone close is the extent to which I can causally help them, not their position in time. In just the same way, we might say that the reason to be sceptical of charity in different locations in the present is its effectiveness as aid, not its physical location in space.
That causal effectiveness (corresponding to opening bullet point 3) seems to me the most promising basis for responding to longtermism, rather than whether those future people exist or have moral value (bullet point 1), yet the latter is what most objections seem to lean on.
[1] I don’t think having to bite bullets is a knockdown refutation, but if your view has them I want to see evidence you’ve included them in your meal, especially if you criticise others for not biting their own bullets.
[2] Interestingly, the corollary of this is that while one cannot derive an ought from an is, one can derive an is from an ought!