Your point about time preference is an important one, and I think you’re right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.

On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we’re concerned. But can’t the longtermist make the same response? Imagine they said: ‘For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we’re concerned. The outcome space about which we’re concerned consists of the following two outcomes: (1) Humanity goes extinct before 2100, (2) Humanity does not go extinct before 2100.’
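For what it’s worth, the arithmetic behind such a lower bound is simple once the coarse partition is fixed. A minimal sketch, where every number (the size of the risk reduction, the value conditional on survival) is a purely illustrative assumption, not a claim about the actual figures:

```python
# Illustrative lower bound over the coarse two-outcome partition:
# (1) humanity goes extinct before 2100, (2) it does not.
# Every number below is hypothetical, chosen only to show the arithmetic.

risk_reduction = 1e-4     # assumed absolute reduction in extinction probability
value_if_survival = 1e16  # assumed value of the long-run future, given survival

# Ignoring all finer-grained futures, the expected value of the intervention
# is at least the probability shifted from outcome (1) to outcome (2),
# times the value conditional on survival.
lower_bound = risk_reduction * value_if_survival
print(lower_bound)
```

The point of the sketch is only that the lower bound requires no claims about the fine-grained space of futures, just the two coarse outcomes.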

And, in any case, it seems like Vaden’s point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.

They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0.

Just want to register strong disagreement with this. (That is, disagreement with the position you report, not with your claim that people hold it.) I think there are enough variables in the world with some nonzero expected impact on the long-term future that, for very many actions, we can hazard guesses about their impact on at least some of those variables, and hence about the expected impact of the individual actions. (Of course one will in fact be wrong in a good fraction of cases, but we’re talking about expectation.)

Note that I feel fine about people saying of lots of activities, ‘Gee, I haven’t thought about that one enough; I really don’t know which way it will come out.’ But I think it’s a sign that longtermism is still meaningfully under development, and we should be wary of rolling it out too fast.

And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked.

The Dutch-book argument relies on your willingness to take both sides of a bet at given odds or probabilities (see Sec. 1.2 of your link). It doesn’t tell you that you must assign probabilities; rather, if you do assign them and are willing to bet on them, they must be consistent with the probability axioms.
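To make the mechanics concrete, here is a minimal sketch of how a bookie exploits credences that violate the axioms, using hypothetical numbers (an agent whose credences in A and not-A sum to only 0.8):

```python
# Minimal Dutch-book sketch with hypothetical numbers.
# An agent holds incoherent credences: P(A) = 0.4 and P(not-A) = 0.4,
# which sum to 0.8 rather than 1, and will take either side of bets
# at those prices.

p_A, p_not_A = 0.4, 0.4  # incoherent: p_A + p_not_A < 1

# Because the agent takes both sides, the bookie can BUY from the agent
# a ticket paying 1 if A (for p_A) and a ticket paying 1 if not-A
# (for p_not_A).
bookie_cost = p_A + p_not_A  # paid out to the agent up front

# Exactly one of A, not-A occurs, so the bookie's tickets pay exactly 1
# whichever way the world goes.
bookie_payout = 1.0

bookie_profit = bookie_payout - bookie_cost
print(bookie_profit)  # positive regardless of which outcome occurs
```

The guaranteed loss only materialises because the agent stands ready to bet on both sides at their stated credences, which is the willingness condition noted above.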

It may be an interesting shift in focus to consider where you would be indifferent between betting for and against the proposition that “>= 10^24 people exist in the future”, since, above, you reason only about taking and not laying billion-to-one odds. An inability to find such a value might cast doubt on the usefulness of probability values here.

(1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.

I don’t believe this relies on any probabilistic argument, or assignment of probabilities, since the superiority of bet (2) follows from logic. Similarly, regardless of your beliefs about the future population, you can win now by arbitrage (e.g. betting against (1) and for (2)) if I’m willing to take both sides of both bets at the same odds.
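That arbitrage can be spelled out explicitly. A small sketch, where the common price p is an arbitrary assumption; the point is just that because (1) entails (2), the combined position never loses, whatever the population turns out to be:

```python
# Sketch of the arbitrage, with a hypothetical common price p.
# (1): population >= 8 billion next year.  (2): population >= 7 billion.
# (1) logically entails (2), so anyone quoting the same price for both,
# and taking both sides, can be arbitraged.

p = 0.5  # assumed common ticket price; any p in (0, 1) works the same way

def profit(population_billions):
    """Net result of selling a (1)-ticket and buying a (2)-ticket."""
    wins_1 = population_billions >= 8
    wins_2 = population_billions >= 7
    sold_1 = p - (1.0 if wins_1 else 0.0)    # collected p; pay 1 if (1) wins
    bought_2 = (1.0 if wins_2 else 0.0) - p  # paid p; collect 1 if (2) wins
    return sold_1 + bought_2

# The combined position never loses, and strictly wins whenever the
# population lands at or above 7 billion but below 8 billion.
for pop in [6, 6.9, 7, 7.5, 8, 9]:
    assert profit(pop) >= 0
```

No probability assignment appears anywhere: the guarantee follows purely from the logical relationship between the two hypotheses.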

Correct me if I’m wrong, but I understand a Dutch book to be taking advantage of my own inconsistent credences (credences that don’t obey the laws of probability, as above). So once I build my set of assumptions about future worlds, I should reason probabilistically within that worldview, or else you can arbitrage me, subject to my willingness to take both sides.

If you set up your own self-consistent assumptions for reasoning about future worlds, I’m not sure how to bridge the gap. We might debate the reasonableness of the assumptions or priors that go into our thinking. We might negotiate odds at which we would bet on “>= 10^24 people exist in the future”, with our far-future progeny transferring money based on the outcome, but I see no way of objectively resolving who is making a “better bet” at the moment.

Thanks!
