Curious how much you would agree with a statement like:
If we had perfect information on expected value, the best options would be those that positively affect the far future. So in practice, looking towards those kinds of options is a useful tool to apply when we are deciding what to do.
(This is my very charitable, weak interpretation of what the Case for Strong Longtermism paper is attempting to argue)
I think I agree, but there’s a lot smuggled into the phrase “perfect information on expected value”. So much, in fact, that I’m not sure I can quite follow the thought experiment.
When I think of “perfect information on expected value”, my first thought is something like a game of roulette. There’s no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient—we would know what sort of future knowledge will be developed, etc. Is this also what you had in mind?
(To complicate matters, the roulette analogy is imperfect. For a typical game of roulette we can write down a pretty robust probabilistic model. But it’s only a model. We could also study the precise physics of that particular roulette wheel, model the hand spinning it (is that how roulette works? I don’t even know), take into account the initial position, the toss of the white ball, and so on. If we spent a long time doing this, we could come up with a model more accurate than our basic probabilistic one. This is all to say that models are tools suited to a particular purpose. So it’s unclear to me what sort of model of the future would let us write down the precise probabilities implicitly required for EV calculations.)
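For concreteness, here is the kind of EV calculation that basic probabilistic model supports (a minimal Python sketch; the 37 pockets and 35:1 single-number payout are standard European roulette rules, not anything from the paper):

```python
# Minimal sketch: expected value of a single-number bet in European roulette
# under the basic probabilistic model (37 equally likely pockets, 35:1 payout).
# Wheel details are standard roulette rules, not taken from the thread.

POCKETS = 37   # numbers 0-36, assumed equally likely
PAYOUT = 35    # a winning single-number bet pays 35:1

def expected_value(stake: float) -> float:
    """EV of staking `stake` on a single number for one spin."""
    p_win = 1 / POCKETS
    p_lose = 1 - p_win
    return p_win * (PAYOUT * stake) + p_lose * (-stake)

print(expected_value(1.0))  # -1/37, roughly -0.027 per unit staked
```

Everything here is pinned down because the rules of the game fix the model’s parameters, which is exactly what seems to be missing when the “game” is the far future.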
Hi Ben. I agree with you. Yes, I think roulette is a good analogy. And yes, I think “perfect information on expected value” is a strange claim to make.
But I do think it is useful to think about what could be said and justified. I do think a claim along these lines could be made: it would not be wholly unfalsifiable, and it would not require completely privileging Bayesian expected value calculations.
To give another analogy, I think there is a reasonable longtermist equivalent of statements like:
Because of differences in wealth and purchasing power, we expect that a donor in the developed West can have a much bigger impact overseas than in their home country. So in practice, looking towards those kinds of international development options is a useful tool to apply when we are deciding what to do.
This does not completely exclude the possibility that we can have impact locally with donations, but it does direct our searching.
Being charitable to Will+Hilary, maybe that is all they are saying. And maybe it is so confusing because they have dressed it up in philosophical language – but that is because, as per GPI’s goals, this paper is about engaging philosophy academics rather than producing any novel insight.
(If I am being more critical: I am not convinced that Will+Hilary give sufficient evidence for such a claim in this paper; see also my list of things their paper could improve, above.)