Hi Vaden,
Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.
First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1⁄6 to the hypothesis that the die lands on 1.
Suppose, for example, that I am offered a choice: either bet on a six-sided die landing on 1 or bet on a twenty-sided die landing on 1. If both probabilities are undefined, then it seems I can permissibly bet on either. But clearly I ought to bet on the six-sided die.
Now you may say that we have a measure over the set of outcomes when we’re rolling a die and we don’t have a measure over the set of futures. But it’s unclear to me what measure could apply to die rolls but not to futures.
And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked and accuracy-dominated.
Suppose, then, that you accept that we must assign probabilities to the relevant hypotheses. Greaves and MacAskill’s point is that all reasonable-sounding probability assignments imply that we ought to pursue longtermist interventions (given that we accept their moral premise, which I discuss later). Consider, for example, the hypothesis that humanity spreads into space and that 10^24 people exist in the future. What probability assignment to this hypothesis sounds reasonable? Opinions will differ to some extent, but it seems extremely overconfident to assign this hypothesis a probability of less than one in one billion. On a standard view about the relationship between probabilities and rational action, that would imply a willingness to stake £1 billion against the hypothesis: losing it all if the hypothesis turns out true and winning an extra £2 if it turns out false (assuming, for illustration’s sake only, that utility is linear with respect to money across this interval).
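To spell out the arithmetic behind that stake (a quick check I’ve added, under the same linear-utility idealisation): for any credence p at or below one in a billion, the bet has positive expected value.

```python
# Quick check of the stake claim above (illustrative only; assumes the
# linear-utility idealisation mentioned in the text).
p = 1e-9                  # the 'overconfident' credence in the 10^24-people hypothesis
stake, winnings = 1e9, 2  # lose the stake if the hypothesis is true, win 2 pounds if false
expected_value = (1 - p) * winnings - p * stake
print(expected_value)     # ~1.0 > 0, so anyone with p <= 1e-9 should accept the bet
```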
The case is the same with other empirical hypotheses that Greaves and MacAskill consider. To get the result that longtermist interventions don’t maximise expected value, you have to make all kinds of overconfident-sounding probability assignments, like ‘I am almost certain that humanity will not spread to the stars,’ ‘I am almost certain that smart, well-motivated people with billions of pounds of resources would not reduce extinction risk by even 0.00001%,’ ‘I am almost certain that billions of pounds of resources devoted to further research on longtermism would not unearth a viable longtermist intervention,’ etc. So, as it turns out, accepting longtermism does not commit us to strong claims about what the future will be like. Instead, it is denying longtermism that commits us to such claims.
So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.
Now, this final sentence is conditional on the truth of Greaves and MacAskill’s moral premises. In particular, it depends on their claim that we ought to have a zero rate of pure time preference.
The first thing to note is that the word ‘pure’ is important here. As you point out, ‘we should be biased towards the present for the simple reason that tomorrow may not arrive.’ Greaves and MacAskill would agree. Longtermists incorporate this factor in their arguments, and it does not change their conclusions. Ord calls it ‘discounting for the catastrophe rate’ in The Precipice, and you can read more about the role it plays there.
When Greaves and MacAskill claim that we ought to have a zero rate of pure time preference, they are claiming that we ought not care less about consequences purely because they occur later in time. This pattern of caring really does seem indefensible. Suppose, for example, that a villain has set a time-bomb in an elementary school classroom. You initially think it is set to go off in a year’s time, and you are horrified. In a year’s time, 30 children will die. Suppose that the villain then tells you that they’ve set the bomb to go off in ten years’ time. In ten years’ time, 30 children will die. Are you now less horrified? If you had a positive rate of pure time preference, you would be. But that seems absurd.
As Ord points out, positive rates of pure time preference seem even less defensible when we consider longer time scales: ‘At a rate of pure time preference of 1 percent, a single death in 6,000 years’ time would be vastly more important than a billion deaths in 9,000 years. And King Tutankhamun would have been obliged to value a single day of suffering in the life of one of his contemporaries as more important than a lifetime of suffering for all 7.7 billion people alive today.’
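For what it’s worth, the arithmetic in that quote checks out. A quick back-of-the-envelope check (mine, using the same 1 percent figure):

```python
# At a 1% rate of pure time preference, how much more does year 6,000 count than year 9,000?
rho = 0.01
relative_weight = (1 + rho) ** 3000   # discount ratio across the intervening 3,000 years
print(f"{relative_weight:.1e}")       # ~9.2e12, dwarfing the factor of 1e9 (a billion deaths)
```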
Thanks again for the post! It’s good to see longtermism getting some critical examination.
Hi Elliott, just a few side comments from someone sympathetic to Vaden’s critique:
I largely agree with your take on time preference. One thing I’d like to emphasize is that the thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist and what the consequences will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there’s often a move where people say, “in theory we should have a zero discount factor, so let’s focus on the future!” But that conclusion ignores the fact that, in practice, we never have such knowledge of the future.
Re: the dice example:
First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1⁄6 to the hypothesis that the die lands on 1.
True—there are infinitely many things that can happen while the die is in the air, but that’s not the outcome space about which we’re concerned. We’re concerned with the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6- vs 20-sided die case). Moreover, they’re defined by us, because we’ve judged that a particular mathematical technique applies relatively well to the situation at hand. When reasoning about all possible futures, however, we’re trying to shoehorn in mathematics that is not appropriate to the problem (math is a tool—sometimes it’s useful, sometimes it’s not). We can’t even write out the outcome space in this scenario, let alone define a probability measure over it.
So, to summarise the above, we have to assign probabilities to empirical hypotheses, on pain of getting Dutch-booked and accuracy-dominated. And all reasonable-seeming probability assignments imply that we should pursue longtermist interventions.
Once you buy into the idea that you must quantify all your beliefs with numbers, then yes—you have to start assigning probabilities to all eventualities, and they must obey certain equations. But you can drop that framework completely. Numbers are not primary—again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this briefly here.) Anyway, the Bayesianism debate is a much longer one (one that I think the community needs to have, however), so I won’t yell about it any longer, but I do want to emphasize that it is only one way to reason about the world (and leads to many paradoxes and inconsistencies, as you all know).
Your point about time preference is an important one, and I think you’re right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.
On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we’re concerned. But can’t the longtermist make the same response? Imagine they said: ‘For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we’re concerned. The outcome space about which we’re concerned consists of the following two outcomes: (1) Humanity goes extinct before 2100, (2) Humanity does not go extinct before 2100.’
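To make the longtermist’s move concrete, here is the kind of lower-bound calculation I have in mind, with purely illustrative numbers of my own (the risk reduction and the value bound are assumptions, not Greaves and MacAskill’s figures):

```python
# Coarse two-outcome partition: {extinct before 2100, not extinct before 2100}.
delta = 1e-6             # assumed reduction in extinction probability bought by the intervention
value_if_survive = 1e16  # assumed lower bound on the value of the future, conditional on survival
value_if_extinct = 0     # normalise the extinction outcome to zero

# Shifting probability mass delta from extinction to survival raises expected value by at least:
ev_gain_lower_bound = delta * (value_if_survive - value_if_extinct)
print(ev_gain_lower_bound)  # 1e10 on these assumptions
```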
And, in any case, it seems like Vaden’s point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.
They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0.
Just want to register strong disagreement with this. (That is, disagreement with the position you report, not disagreement that you know people who hold this position.) I think there are enough variables in the world with some nonzero expected impact on the long-term future that, for very many actions, we can hazard guesses about their impact on at least some of those variables, and hence about the expected impact of the actions themselves (of course one will turn out to be wrong in a good fraction of cases, but we’re talking about expectations).
Note I feel fine about people saying of lots of activities “gee I haven’t thought about that one enough, I really don’t know which way it will come out”, but I think it’s a sign that longtermism is still meaningfully under development and we should be wary of rolling it out too fast.
And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked
The Dutch-book argument relies on your willingness to take both sides of a bet at given odds or probabilities (see Sec. 1.2 of your link). It doesn’t tell you that you must assign probabilities; rather, if you do assign them and are willing to bet on them, they must be consistent with the probability axioms.
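For concreteness, here is a toy version of that setup (my own example, not taken from the linked article): if my credences in A and not-A sum to less than 1 and I will take either side of a bet at prices equal to my credences, a bookie can guarantee a profit off me.

```python
# Toy Dutch book. My credences in A and not-A sum to 0.8 < 1, and I will buy or
# sell a ticket paying 1 if the proposition is true at a price equal to my credence.
credence_A, credence_not_A = 0.4, 0.4

# The bookie buys both tickets from me at my stated prices...
income_now = credence_A + credence_not_A   # I receive 0.8 up front

# ...but exactly one of A, not-A obtains, so I pay out 1 whichever way the world goes.
payout_later = 1.0
print(income_now - payout_later)           # -0.2: a guaranteed loss for me
```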
It may be an interesting shift in focus to consider the odds at which you would be indifferent between betting for and against the proposition that “>= 10^24 people exist in the future”, since, above, you reason only about taking and not laying billion-to-one odds. An inability to find such a value might cast doubt on the usefulness of probability values here.
(1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.
I don’t believe this relies on any probabilistic argument, or assignment of probabilities, since the superiority of bet (2) follows from logic. Similarly, regardless of your beliefs about the future population, you can win now by arbitrage (e.g. betting against (1) and for (2)) if I’m willing to take both sides of both bets at the same odds.
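To illustrate that arbitrage with hypothetical numbers (the price q is mine, purely for illustration): buy the ticket on (2), sell the ticket on (1) at the same price, and you can never lose.

```python
# Same price q for a ticket paying 1 if the proposition is true, and you take either side.
q = 0.5   # hypothetical; the argument works for any common price

# I buy the (2)-ticket ('>= 7 billion') and sell you the (1)-ticket ('>= 8 billion'),
# so my net outlay now is q - q = 0.
for population in (6.5e9, 7.5e9, 8.5e9):     # one case from each region
    receive = 1 if population >= 7e9 else 0  # my ticket on (2)
    pay = 1 if population >= 8e9 else 0      # the ticket I sold on (1)
    print(population, receive - pay)         # never negative; +1 if 7bn <= pop < 8bn
```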
Correct me if I’m wrong, but I understand a Dutch-book to be taking advantage of my own inconsistent credences (which don’t obey laws of probability, as above). So once I build my set of assumptions about future worlds, I should reason probabilistically within that worldview, or else you can arbitrage me subject to my willingness to take both sides.
If you adopt your own set of self-consistent assumptions for reasoning about future worlds, I’m not sure how to bridge the gap. We might debate the reasonableness of the assumptions or priors that go into our thinking. We might negotiate odds at which we would bet on “>= 10^24 people exist in the future”, with our far-future progeny transferring $ based on the outcome, but I see no way of objectively resolving who is making a “better bet” at the moment.
I think what matters is not the probability of these events independent of our influence; it’s our causal effect on them. Longtermism rests on the claim that we can predictably affect the long-term future for the better. You say that it would be overconfident to assign probabilities that are too low in certain cases, but that argument also applies to the risk of well-intentioned longtermist interventions backfiring, e.g. by accelerating AI development faster than we can align it, by creating a false sense of security and complacency, or through the possibility that the future could be worse if we don’t go extinct. Any intervention can backfire. Most will accomplish little. With longtermist interventions, we may never know, since the feedback is too poor to tell.
I also disagree that we should have sharp probabilities, since that means making fairly arbitrary but potentially hugely influential commitments; dealing with this is what sensitivity analysis and robust decision-making under deep uncertainty are for. And requiring sharp probabilities doesn’t rule out the possibility that we could come to vastly different conclusions based on exactly the same evidence, just because we have different priors or weight the evidence differently.
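To make that concrete, here is the kind of sensitivity check I have in mind, with entirely hypothetical numbers: rather than committing to one sharp probability, ask whether the ranking of interventions survives across the whole range of assignments you’d consider reasonable.

```python
# Hypothetical sensitivity check: does the longtermist intervention beat a safe,
# measurable one across the range of 'reasonable' probabilities of success?
short_term_value = 1e3                        # assumed value of the safe intervention
payoff_if_success = 1e16                      # assumed payoff if the longtermist bet pays off

for p_success in (1e-15, 1e-12, 1e-9, 1e-6):  # a wide range of priors people might hold
    longtermist_ev = p_success * payoff_if_success
    print(p_success, longtermist_ev > short_term_value)
# The verdict flips within this range: the ranking hinges on a sharp number
# we have no principled way to pin down.
```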