Thanks for writing this; I thought it was good.
I would wonder if we might consider weakening this a little:
(i) Those who live at future times matter just as much, morally, as those who live today;
Anecdotally, it seems that many people (even people I’ve spoken to at EA events!) consider future generations to have zero value. Caring any amount about future people at all is already a significant divergence, and I would instinctively say that someone who cared about the indefinite future, but applied a modest discount factor, was also longtermist, in the colloquial-EA sense of the word.
I second weakening the definition. As someone who cares deeply about future generations, I think it is infeasible to value them equally to people today in terms of actual actions. I sketched out an optimal mitigation path for asteroid/comet impact. Just valuing the present generation in one country, we should do alternate foods. Valuing the present world, we should do asteroid detection/deflection. Once you value hundreds of future generations, we should add in food storage and comet detection/deflection, costing many trillions of dollars. But if you value people even further in the future, we should take even more extreme measures, like many redundancies. And this is for a very small risk compared to things like nuclear winter and AGI. Furthermore, even if one does discount future generations, if you think we could have many computer consciousnesses in only a century or so, again we should be donating huge amounts of resources to reducing even small risks. I guess one way of valuing future generations equally to the present generation is to value each generation an infinitesimal amount, but that doesn’t seem right.
Is the argument here something along the lines of: “I find that I don’t want to struggle to do what these values would demand, so they must not be my values”?
I hope I’m not seeing an aversion to surprising conclusions in moral reasoning. Science surprises us often, but it keeps getting closer to the truth. Technology surprises us all of the time, but it keeps getting more effective. If you won’t accept any sort of surprise in the domain of applied morality, your praxis is not going to end up being very good.
Thanks for your comment. I think my concern is basically addressed by Will’s comment below. That is, it is good to value everyone equally. However, it is not required in our daily actions to value a random person alive today as much as ourselves, or a random person in the future as much as ourselves. That is, it is permissible to have some special relationships and have some personal prerogatives.
Thanks for this! Wasn’t expecting pushback on this aspect, so that’s helpful.
I’ll start with a clarification. What I mean by that clause is: For any population of people p, and any permutation p’ of that population with respect to time that keeps each individual’s wellbeing level the same, p and p’ are equally good.
I.e. if you take some world, and move everyone around in time but keep their levels of wellbeing the same, then the new world is just as good as the old world.
(Caveat: I’m sure that there will be technical problems with this principle, so I’m just suggesting it as a first pass. This is in analogy with how I’d define the ideas that race, gender, etc. are morally irrelevant: imagine two worlds where everything is the same except that people’s races or genders are different; these are equally good.)
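One way to write the intended principle down (a rough formalisation of the same idea, subject to the same caveat): treat a world as a finite list of (birth time, wellbeing) pairs; then reshuffling the times while holding each person’s wellbeing fixed leaves overall value unchanged.

\[
V\bigl((t_1, u_1), \dots, (t_n, u_n)\bigr)
  = V\bigl((t_{\sigma(1)}, u_1), \dots, (t_{\sigma(n)}, u_n)\bigr)
  \qquad \text{for every permutation } \sigma \text{ of } \{1, \dots, n\}.
\]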
That *does* rule out pure time-discounting, which Haydn suggests. But I’d need quite a lot of convincing to allow that into a definition of longtermism. (The strongest case I could see would be if spatiotemporal discounting is the best solution to problems of infinite ethics.)
But it doesn’t rule out the following:
Special relationships. E.g. you can believe that everyone is equally valuable but, because of a special relationship you have with your children, it’s permissible for you to save your child’s life rather than two strangers. Ditto perhaps you have a special relationship with people in your own society that you’re interacting with (and perhaps have obligations of reciprocity towards).
Personal prerogatives. E.g. you can believe that $10 would do more good buying bednets than paying for yourself to go to the movies, but that it’s permissible for you to go to the movies. (Ditto perhaps for spending on present-day projects rather than entirely on long-run projects.)
If you also add the normative assumptions of agent-neutral consequentialism and expected utility theory, and the empirical assumptions that the future is big and affectable, then you do get led to strong or very strong longtermism. But, if you accept those assumptions, then strong or very strong longtermism seems correct.
I’m worried that a weakening where we just claim that future people matter, to some degree, would create too broad a church. In particular: Economists typically suppose something like a 1.5% pure rate of time preference. On this view, even people in a billion years’ time matter. But the amount by which they matter is tiny and, in practice, any effects of our actions beyond a few centuries are irrelevant. I want a definition of longtermism—even a minimal definition—to rule out that view. But then I can’t think of a non-arbitrary stopping point in between that view and my version. And I do think that there are some benefits in terms of inspiringness of my version, too — it’s clear and robust.
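To make the discounting point concrete, here is the rough arithmetic (illustrative numbers of my own, assuming a constant 1.5% annual pure rate of time preference): the weight on someone’s wellbeing t years from now is

\[
w(t) = 1.015^{-t}, \qquad
w(100) \approx 0.23, \quad
w(300) \approx 0.011, \quad
w(1000) \approx 3 \times 10^{-7}, \quad
w(10^9) < 10^{-6{,}000{,}000},
\]

so anyone more than a few centuries out gets effectively no weight in practice.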
What would most move me is if I thought that the weaker version would capture a lot more people. And it’s interesting that Larks has had the experiences they have had.
But it currently seems to me that’s not the case. Aron Vallinder and I ran a survey on people’s attitudes on this issue: here were the results for how much people agree or disagree with the claim that ‘people in the distant future matter just as much as those alive today’:
Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.
Hi Will,
I would expect the item “People in the distant future matter just as much as those alive today” to produce somewhat inflated levels of agreement. One reason is that I expect that any formulation along the lines of “X people matter just as much as Y people” will encourage agreement, because people don’t want to be seen as explicitly saying that any people matter less than others and agreement seems pretty clearly to be the socially desirable option. Another is that acquiescence bias will increase agreement levels, particularly where the proposition is one that people haven’t really considered before and/or don’t have clearly defined attitudes towards.
Sanjay and I found pretty different results from this and, as I think he mentioned, we’ll be sharing a write-up of the results soon.
Interesting! Can’t wait. :)
(And agree that this would produce inflated levels of agreement, but feel like “do you endorse this statement?” is the relevant criterion for a definition even if that endorsement is inflated relative to action).
That does rule out pure time-discounting, which Haydn suggests. But I’d need quite a lot of convincing to allow that into a definition of longtermism. (The strongest case I could see would be if spatiotemporal discounting is the best solution to problems of infinite ethics.)
It seems quite plausible to me (based on intuitions from algorithmic complexity theory) that spatiotemporal discounting is the best solution to problems of infinite ethics. (See Anatomy of Multiversal Utility Functions: Tegmark Level IV for a specific proposal in this vein.)
I think the kinds of discounting suggested by algorithmic information theory are mild enough in practice to be compatible with our intuitive notions of longtermism (e.g., the discount factors for current spacetime and a billion years from now are almost the same), and would prefer a definition that doesn’t rule them out, in case we later determine that the correct solution to infinite ethics does indeed lie in that direction.
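A rough way to see the contrast (my gloss on the kind of weighting at issue, not the specific proposal linked above): a complexity-based weight penalises a time t by its description length K(t), which grows at most roughly logarithmically in t, whereas a pure rate of time preference penalises t exponentially:

\[
2^{-K(t)} \;\gtrsim\; 2^{-\left(\log_2 t \,+\, 2\log_2 \log_2 t \,+\, O(1)\right)}
\qquad \text{vs.} \qquad 1.015^{-t}.
\]

At t equal to a billion years, the left-hand side is, up to the additive constant, no smaller than roughly 10^-12 (and much larger for simply describable dates), while the right-hand side is below 10^-6,000,000.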
Given this, I’m inclined to stick with the stronger version — it already has broad appeal, and has some advantages over the weaker version.
Why not include this in the definition of strong longtermism, but not weak longtermism?
Having longtermism just mean “caring a lot about the long-term future” seems the most natural and least likely to cause confusion. I think for it to mean anything other than that, you’re going to have to keep beating people over the head with the definition (analogous to the sorry state of the phrase, “begs the question”).
When most people first hear the term longtermism, they’re going to hear it in conversation or see it in writing without the definition attached to it. And they are going to assume it means caring a lot about the long-term future. So why define it to mean anything other than that?
On the other hand, anyone who comes across strong longtermism is much more likely to realize that it’s a very specific technical term, so it seems much more natural to attach a very specific definition to it.
IMHO the most natural name for “people at any time have equal value” should be something like temporal indifference, which more directly suggests that meaning.
Edit: I retract temporal indifference in favor of Holly Elmore’s suggestion of temporal cosmopolitanism.
I agree with the sentiment that clause (i) is stronger than it needs to be. I don’t really think this is because it would be good to include other well-specified positions like exponential discounting, though. It’s more that it’s taking a strong position, and that position isn’t necessary for the work we want the term to do. On the other hand I also agree that “nonzero” is too weak. Maybe there’s a middle ground using something like the word “significant”?
[For my own part intellectual honesty might make me hesitate before saying “I agree with longtermism” with the given definition — I think it may well be correct, but I’m noticeably less confident than I am in some related claims.]
“Aron Vallinder and I ran a survey on people’s attitudes on this issue…”
Hey Will — who was this a survey of?
I think Positly and mTurk, for people with university degrees in the US. We’ll share a proper write-up of the wider survey soon; we haven’t really looked at the data yet.
I wonder if it might be helpful to modify your claim (i) to be more similar to Hilary’s definition by referring to intrinsic value rather than mattering morally. E.g. something like:
(i) Lives situated in the future have just as much intrinsic value as lives situated in the present
I think that wording could be improved but to me it seems like it does a better job of conveying:
“For any population of people p, and any permutation p’ of that population with respect to time that keeps each individual’s wellbeing level the same, p and p’ are equally good.”
As well as making allowance for special relationships and personal prerogatives, this also allows for the idea that the current generation holds some additional instrumental value (in enabling/affecting future generations) in addition to our intrinsic value. To me this instrumental value would have some impact on the extent to which people matter morally.
I think if you acknowledge that current populations may have some greater value (e.g. by virtue of instrumental value) then you would need to make claim (ii) stronger, e.g. “society currently over-privileges those who live today above those who will live in the future”.
I appreciate that “matter just as much, morally” is a stronger statement (and perhaps carries some important meaning in philosophy of which I’m ignorant?). I think it also sounds nicer, which seems important for an idea that you want to have broad appeal. But perhaps its ambiguity (as I perceive it) leaves it more open to objections.
Also, FWIW I disagree with the idea that (i) could be replaced with “Those who live at future times matter morally”. It doesn’t seem strong enough and I don’t think (iii) would flow from that and (ii) as it is. So I think if you did change to this weaker version of (i) it would be even more important to make (ii) stronger.
Well, we should probably distinguish between:
1. Whether creating a person with a positive quality of life is bestowing a benefit on them (which is controversial)
2. Whether affecting the welfare of someone in the future matters morally (not really controversial)
The minimal definition should probably include something like claim 2, but not claim 1. I think the current definition leaves it somewhat ambiguous, although I’m inclined to interpret it as claim 2. I’d be surprised if you think claim 2 is controversial.
It is not that easy to distinguish between these two theories! Consider three worlds:
1. Sam exists with welfare 20
2. Sam does not exist
3. Sam exists with welfare 30
If you don’t value creating positive people, you end up being indifferent between the first and second worlds, and between the second and third worlds, so by transitivity you should be indifferent between the first and third. But by claim 2, you want to prefer the third to the first, suggesting a violation of transitivity.
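Spelling out the structure (a sketch, writing W1, W2, W3 for the three worlds above and ~ for “equally good”):

\[
W_1 \sim W_2 \ \text{and}\ W_2 \sim W_3
\;\Rightarrow\; W_1 \sim W_3 \ \text{(by transitivity)},
\qquad \text{yet claim 2 gives } W_3 \succ W_1 .
\]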
The (normal) person-affecting response here is to say that worlds 1 and 3 are incomparable in value to 2: existence is neither better than, worse than, nor equally good as non-existence for someone. However, if Sam exists necessarily, then 2 isn’t an option, so then we say 3 is better than 1. Hence, no issues with transitivity.
Well, that doesn’t show it’s hard to distinguish between the views; it just shows a major problem for person-affecting views that want to hold claim 2 but not claim 1.
You mean, shows a major finding, no? :)