Thanks for this! Wasn’t expecting pushback on this aspect, so that’s helpful.
I’ll start with a clarification. What I mean by that clause is: for any population of people p, and any permutation p' of that population with respect to time that keeps the wellbeing levels of all individuals the same, p and p' are equally good.
I.e. if you take some world, and move everyone around in time but keep their levels of wellbeing the same, then the new world is just as good as the old world.
(Caveat: I’m sure that there will be technical problems with this principle, so I’m just suggesting it as a first pass. This is in analogy with how I’d define the ideas that race, gender, etc. are morally irrelevant: imagine two worlds where everything is the same except that people’s races or genders are different; these are equally good.)
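A tentative formal gloss of the clause (my notation, not part of Will’s wording), where σ is any reassignment of individuals to times and w_i is individual i’s wellbeing:

```latex
% Tentative formalisation (my notation, not Will's): \sigma reassigns
% individuals to times, w_i(\cdot) is individual i's wellbeing, and
% \sim means "equally good".
\[
  \bigl[\, p' = \sigma(p) \ \wedge\ w_i(p') = w_i(p) \ \text{for every individual } i \,\bigr]
  \;\Longrightarrow\; p' \sim p .
\]
```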
That *does* rule out pure time-discounting, which Haydn suggests. But I’d need quite a lot of convincing to allow that into a definition of longtermism. (The strongest case I could see would be if spatiotemporal discounting is the best solution to problems of infinite ethics.)
But it doesn’t rule out the following:
- **Special relationships.** E.g. you can believe that everyone is equally valuable but, because of a special relationship you have with your children, it’s permissible for you to save your child’s life rather than two strangers. Ditto, perhaps, for people in your own society whom you interact with (and perhaps have obligations of reciprocity towards).
- **Personal prerogatives.** E.g. you can believe that $10 would do more good buying bednets than paying for yourself to go to the movies, but that it’s permissible for you to go to the movies. (Ditto perhaps for spending on present-day projects rather than entirely on long-run projects.)
If you also add the normative assumptions of agent-neutral consequentialism and expected utility theory, and the empirical assumptions that the future is big and affectable, then you do get led to strong or very strong longtermism. But, if you accept those assumptions, then strong or very strong longtermism seems correct.
I’m worried that a weakening where we just claim that future people matter, to some degree, would create too broad a church. In particular: economists typically suppose something like a 1.5% pure rate of time preference. On this view, even people in a billion years’ time matter. But the amount by which they matter is tiny and, in practice, any effects of our actions beyond a few centuries are irrelevant. I want a definition of longtermism—even a minimal definition—to rule out that view. But then I can’t think of a non-arbitrary stopping point in between that view and my version. And I do think that my version has some benefits in terms of how inspiring it is, too: it’s clear and robust.
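To make the “irrelevant in practice” point concrete, here’s a quick back-of-the-envelope sketch (my own arithmetic, not figures from the post) of the relative weight a 1.5% pure rate of time preference gives to people at various distances in time:

```python
import math

# Back-of-the-envelope illustration (my arithmetic, not from the post): with
# a 1.5% pure rate of time preference, a person living t years from now gets
# relative weight 1.015**-t. Reported in log10 form, since the raw weights
# underflow ordinary floats long before a billion years.
for years in [100, 300, 500, 1_000, 10_000, 1_000_000_000]:
    log10_weight = -years * math.log10(1.015)
    print(f"{years:>13,} years out: weight ≈ 10^{log10_weight:,.1f}")
```

On these numbers someone a thousand years out counts for well under a millionth of someone alive today, and anyone on astronomical timescales counts for effectively nothing, which is the sense in which, on that view, effects beyond a few centuries become irrelevant in practice.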
What would most move me is if I thought that the weaker version would capture a lot more people. And it’s interesting that Larks has had the experiences they have had.
But it currently seems to me that’s not the case. Aron Vallinder and I ran a survey on people’s attitudes on this issue: here were the results for how much people agree or disagree with the claim that ‘people in the distant future matter just as much as those alive today’:
Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.
Hi Will,
I would expect the item “People in the distant future matter just as much as those alive today” to produce somewhat inflated levels of agreement. One reason is that I expect that any formulation along the lines of “X people matter just as much as Y people” will encourage agreement, because people don’t want to be seen as explicitly saying that any people matter less than others and agreement seems pretty clearly to be the socially desirable option. Another is that acquiescence bias will increase agreement levels, particularly where the proposition is one that people haven’t really considered before and/or don’t have clearly defined attitudes towards.
Sanjay and I found pretty different results from this and, as I think he mentioned, we’ll be sharing a write-up of the results soon.
Interesting! Can’t wait. :)
(And I agree that this would produce inflated levels of agreement, but I feel like “do you endorse this statement?” is the relevant criterion for a definition, even if that endorsement is inflated relative to action.)
It seems quite plausible to me (based on intuitions from algorithmic complexity theory) that spatiotemporal discounting is the best solution to problems of infinite ethics. (See “Anatomy of Multiversal Utility Functions: Tegmark Level IV” for a specific proposal in this vein.)
I think the kinds of discounting suggested by algorithmic information theory are mild enough in practice to be compatible with our intuitive notions of longtermism (e.g., the discount factors for current spacetime and for a billion years from now are almost the same), and I would prefer a definition that doesn’t rule them out, in case we later determine that the correct solution to infinite ethics does indeed lie in that direction.
Why not include this in the definition of strong longtermism, but not weak longtermism?
Having longtermism just mean “caring a lot about the long-term future” seems the most natural and least likely to cause confusion. I think for it to mean anything other than that, you’re going to have to keep beating people over the head with the definition (analogous to the sorry state of the phrase, “begs the question”).
When most people first hear the term longtermism, they’re going to hear it in conversation or see it in writing without the definition attached to it. And they are going to assume it means caring a lot about the long-term future. So why define it to mean anything other than that?
On the other hand, anyone who comes across strong longtermism is much more likely to realize that it’s a very specific technical term, so it seems much more natural to attach a very specific definition to it.
IMHO the most natural name for “people at any time have equal value” would be something like *temporal indifference*, which more directly suggests that meaning.
Edit: I retract *temporal indifference* in favor of Holly Elmore’s suggestion of *temporal cosmopolitanism*.
I agree with the sentiment that clause (i) is stronger than it needs to be. I don’t really think this is because it would be good to include other well-specified positions like exponential discounting, though. It’s more that it’s taking a strong position, and that position isn’t necessary for the work we want the term to do. On the other hand I also agree that “nonzero” is too weak. Maybe there’s a middle ground using something like the word “significant”?
[For my own part intellectual honesty might make me hesitate before saying “I agree with longtermism” with the given definition — I think it may well be correct, but I’m noticeably less confident than I am in some related claims.]
“Aron Vallinder and I ran a survey on people’s attitudes on this issue…”
Hey Will — who was this a survey of?
I think Positly and mTurk, for people with university degrees in the US. We’ll share a proper write-up of the wider survey soon; we haven’t really looked at the data yet.
I wonder if it might be helpful to modify your claim (i) to be more similar to Hilary’s definition by referring to intrinsic value rather than mattering morally. E.g. something like:
(i) Lives situated in the future have just as much intrinsic value as lives situated in the present
I think that wording could be improved but to me it seems like it does a better job of conveying:
“For any population of people p, and any permutation p' of that population with respect to time that keeps the wellbeing levels of all individuals the same, p and p' are equally good.”
As well as making allowance for special relationships and personal prerogatives, this also allows for the idea that the current generation holds some additional instrumental value (in enabling/affecting future generations) in addition to our intrinsic value. To me this instrumental value would have some impact on the extent to which people matter morally.
I think if you acknowledge that current populations may have some greater value (e.g. by virtue of instrumental value) then you would need to make claim (ii) stronger, e.g. “society currently over-privileges those who live today above those who will live in the future”.
I appreciate that “matter just as much, morally” is a stronger statement (and perhaps carries some important meaning in philosophy of which I’m ignorant?). I think it also sounds nicer, which seems important for an idea that you want to have broad appeal. But perhaps its ambiguity (as I perceive it) leaves it more open to objections.
Also, FWIW I disagree with the idea that (i) could be replaced with “Those who live at future times matter morally”. It doesn’t seem strong enough and I don’t think (iii) would flow from that and (ii) as it is. So I think if you did change to this weaker version of (i) it would be even more important to make (ii) stronger.