I think the assumption that expected value calculations are the correct decision-making tool is incorrect (as per others at GPI like Owen's work and Andreas' work, bounded rationality, the entire field of risk management, thinkers like Taleb, Knightian uncertainty, etc.). A charitable reading would say that they recognise this is an assumption but choose not to address it.
Hmm perhaps you need to read the paper again. They say for example:
In sections 2 and 3, we will at times help ourselves to some popular but controversial axiological and decision-theoretic assumptions (specifically, total utilitarianism and expected utility theory). This, however, is mainly for elegance of exposition. Section 4 conducts the corresponding sensitivity analyses, and argues that plausible ways of deviating from these assumptions are unlikely to undermine the argument
Indeed they go on in section 4.5 to consider other decision theories, including Knightian uncertainty, and conclude that strong longtermism is robust to these other theories. I’m not saying they’re definitely right, just that they haven’t assumed expected value theory is correct as you claim.
OK Jack, I have some time today, so let's dive in.
So, my initial reading of 4.5 was that they get it very, very wrong.
E.g.: "we assumed that the correct way to evaluate options in ex ante axiological terms, under conditions of uncertainty, is in terms of expected value". Any of the points above would disagree with this.
E.g.: "[Knightian uncertainty] supports, rather than undermining, axiological strong longtermism". This is just not true. Some Knightian uncertainty methods would support it (e.g. robust decision making) and some would not (e.g. plan-and-adapt).
So why does it look like they get this so wrong?
Maybe they are trying to achieve something different from what we in this thread think they are trying to achieve.
My analysis of their analysis of Knightian uncertainty can shed some light here.
The point of Knightian (or deep) uncertainty tools is that an expected value calculation is the wrong tool for humans to use when making decisions under Knightian uncertainty: as a decision tool, an expected value calculation will not lead to the best outcome, i.e. the outcome with the highest true expected value. [Note: I use "true expected value" to mean the expected value if there were no uncertainty, which can differ from the calculated expected value.] The aim is still the same (to maximise true expected value) but the approach is different. Why the different approach? Because in practice expected value calculations do not work well: they lead to anchoring, lead to unknown unknowns being ignored, are super sensitive to speculation, and so on. The tools used are varied but include tactics such as encouraging decision makers to aim for an option that is satisficing (least bad) across a variety of domains rather than maximising (this specific tool minimises the risk of unknown unknowns being ignored).
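To make the contrast concrete, here is a minimal sketch of how an expected-value rule and a "least bad" (maximin-style) rule can pick different options from the same table. All the payoffs, scenarios and probabilities are invented for illustration; they come from neither the paper nor the deep-uncertainty literature:

```python
# Toy sketch: one payoff table, two decision rules.

payoffs = {
    "option_a": [10, 8, -30],  # great in most scenarios, terrible in one
    "option_b": [5, 4, 3],     # unspectacular but least bad everywhere
}
guessed_probs = [0.5, 0.4, 0.1]  # speculative numbers of exactly the kind
                                 # that deep-uncertainty tools tell us not to trust

def calculated_ev(outcomes, probs):
    return sum(o * p for o, p in zip(outcomes, probs))

# Rule 1: maximise calculated expected value (anchors on guessed_probs).
ev_pick = max(payoffs, key=lambda k: calculated_ev(payoffs[k], guessed_probs))

# Rule 2: maximin, i.e. pick the option whose worst case is least bad,
# ignoring the untrusted probabilities entirely.
maximin_pick = max(payoffs, key=lambda k: min(payoffs[k]))

print(ev_pick)       # option_a (calculated EVs: 5.2 vs 4.4)
print(maximin_pick)  # option_b (worst cases: -30 vs 3)
```

Both rules are aiming at the same thing (the truly best option); they just disagree about whether the guessed probabilities are safe to lean on.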
But when Will+Hilary explain Knightian uncertainty they explain it as if it poses a fundamental axiological difference: as if aiming for the least bad option is done because the least bad option is the true best option (as opposed to the calculated best option, if that makes sense). This is not at all what anyone I know who uses these tools believes.
Let's pause and note that, as Knightian uncertainty tools are still aiming to guide actors towards the true highest expected value, they could theoretically be explained in terms of expected value. They don't challenge the expected value axiology.
Clearly Will+Hilary are not, in this paper, interested in whether it poses an alternative methodology for reaching the true expected value; they are only interested in whether it could be used to justify a different axiology. This would explain why the paper ignores all the other tools (like predict-then-act tools), focuses on this one tool, and explains it in a strange way.
The case they are making (by my charitable reading) is that, if we are aiming for true expected value then, because the future is so, so big, we should expect to be able to find at least some options that influence it, and the thing that does the most good is likely to be among those options.
They chose expected value calculations as a way to illustrate this.
As Owen says here, they are "talking about how an ideal rational actor should behave – which I think is informative but not something to be directly emulated".
They do not seem to be aiming to say anything on how to make decisions about what to focus on.
So I stand by my claim that the most charitable reading is that they are deliberately not addressing how to make decisions.
--
As far as I can tell, in layman's terms, this paper tries to make the case that: if we had perfect information [edit: on expected value], the best options would be those that positively affect the far future. So in practice, looking towards those kinds of options is a useful tool to apply when we are deciding what to do.
FWIW I expect this paper is largely correct (if the conclusion is as above). However I think it could be improved in some ways:
It is opaque. Maybe it is clearer to fellow philosophers, but I reached my view of what the paper was trying to achieve by looking at how they manage to mis-explain a core decision-making concept two-thirds of the way through, and then extrapolated the ways they could be rationally making their apparent errors. It is not easy to understand what they are doing, and I think most people on this thread would have a different view to me about this paper. It would be good to have a bit more text for us layfolk.
It could be misconstrued. Work like this leads people to think that Will+Hilary and others believe that expected value calculations are the key tool for decision making. They are not. (I am assuming they only reference expected value calculations for illustrative purposes; if I am incorrect then their paper is either poor or I really don't get it.)
It leaves unanswered questions, but does not make it clear what those questions are. I do think it is useful to know that we should expect the highest-impact actions to be those that have long-run positive consequences. But how the hell should anyone actually make a decision and compare the short term and the long term? This paper does not help with this. It could maybe highlight the need to research this.
It is a weak argument. It is plausible to me that alternative decision-making tools might confuse their conclusions so much that, when applied in practice by a philanthropist etc., the result largely does not apply.
For example, one could believe that economic growth is good for the future, that most people who try to impact the world positively without RCT-level evidence fail, and that situations of high uncertainty are best resolved through engineering short feedback loops, and quite rationally conclude that AMF (bednets) is currently the charity that has the biggest positive long-run effect on the future. I don't think this contradicts anything in the paper and I don't think it would be unreasonable.
There are other flaws with the paper too, in the more empirical part with all the examples. E.g. even a very, very low discount rate to account for things like extinction risk or a sudden windfall really quickly reduces the amount the future matters (see the rough arithmetic after this list). (Note this is different from pure time-preference discounting.)
In my view they overstate (or are misleading about) what they have achieved. E.g. I do not think, for the reasons given, that they have at all shown that "plausible deviations from [an expected utility treatment of decision-making under uncertainty] do not undermine the core argument". (This is only true insofar as decision-making approaches are, as far as I can tell, not at all relevant to their core argument.) They have maybe shown something like: "plausible deviations from expected utility theory do not undermine the core argument".
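On the discounting point above, a rough worked example (the 0.1% rate is purely illustrative, not a figure from the paper):

```python
# A constant annual "catastrophe" discount of r (e.g. a small yearly chance of
# extinction, or of a sudden windfall making today's efforts moot) compounds
# as (1 - r) ** t. Even r = 0.1% per year shrinks far-future value quickly.
r = 0.001  # illustrative 0.1% per year

for years in [100, 1_000, 10_000, 100_000]:
    weight = (1 - r) ** years
    print(f"{years:>7} years out: weight = {weight:.2e}")

# Roughly 0.90 at 100 years, 0.37 at 1,000, 4.5e-05 at 10,000, and 3.5e-44 at
# 100,000: the weight on the far future collapses within a few millennia.
```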
Let me know what you think.
Catch ya about :-)
@ Ben_Chugg
Curious how much you would agree with a statement like:
If we had perfect information [edit: on expected value], the best options would be those that positively affect the far future. So in practice, looking towards those kinds of options is a useful tool to apply when we are deciding what to do.
(This is my very charitable, weak interpretation of what the Case for Strong Longtermism paper is attempting to argue)
I think I agree, but there’s a lot smuggled into the phrase “perfect information on expected value”. So much in fact that I’m not sure I can quite follow the thought experiment.
When I think of “perfect information on expected value”, my first thought is something like a game of roulette. There’s no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient—we would know what sort of future knowledge will be developed, etc. Is this also what you had in mind?
(To complicate matters, the roulette analogy is imperfect. For a typical game of roulette we can write down a pretty robust probabilistic model. But it's only a model. We could also study the precise physics of that particular roulette board, model the hand spinning the wheel (is that how roulette works? I don't even know), take into account the initial position, the toss of the white ball, and so on and so forth. If we spent a long time doing this, we could come up with a model which was more accurate than our basic probabilistic model. This is all to say that models are tools suited to a particular purpose. So it's unclear to me what sort of model of the future would let us write down the precise probabilities that EV calculations implicitly require.)
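To make the "it's only a model" point concrete, here's a toy sketch; the biased probability is completely made up:

```python
# The same straight-up roulette bet has different calculated EVs under the
# textbook model and under a hypothetical physics-informed model of one
# particular wheel (bias figure invented for illustration).

def ev_straight_up_bet(p_win):
    # A straight-up bet pays 35:1; otherwise the unit stake is lost.
    return p_win * 35 + (1 - p_win) * (-1)

print(ev_straight_up_bet(1 / 37))    # textbook European wheel: about -0.027
print(ev_straight_up_bet(1.2 / 37))  # made-up biased wheel:    about +0.168
```

Same bet, two models, two EVs: which calculated EV you get depends on which model you chose to build.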
Hi Ben. I agree with you. Yes, I think roulette is a good analogy. And yes, I think "perfect information on expected value" is a strange claim to make.
But I do think it is useful to think about what could be said and justified. I do think a claim along these lines could be made and it would not be wholly unfalsifiable and it would not require completely preferencing Bayesian expected value calculations.
To give another analogy I think there is a reasonable long-termist equivalent of statements like:
Because of differences in wealth and purchasing power we expect that a donor in the developed west can have a much bigger impact overseas than in their home country. So in practice looking towards those kinds of international development options is a useful tool to apply when we are deciding what to do.
This does not completely exclude the possibility that we can have impact locally with donations, but it does direct our searching.
Being charitable to Will+Hilary, maybe that is all they are saying. And maybe it is so confusing because they have dressed it up in philosophical language – but this is because, as per GPI's goals, this paper is about engaging philosophy academics rather than producing any novel insight.
(If being more critical: I am not convinced that Will+Hilary successfully give sufficient evidence to make such a claim in this paper; see also my list above of ways their paper could improve.)
Thanks for this! All interesting and I will have to think about this more carefully when my brain is fresher. I admit I’m not very familiar with the literature on Knightian uncertainty and it would probably help if I read some more about that first.
You said: "It is misleading. Work like this leads people to think that Will+Hilary and others believe that expected value calculations are the key tool for decision making. They are not. (I am assuming they only reference expected value calculations for illustrative purposes; if I am incorrect then their paper is either really poor or I really don't get it.)"
OK, if I understand you correctly, what you have said is that Will and Hilary present Knightian uncertainty as axiologically different to EV reasoning, when you don't think it is. I agree with you that ideally section 4.5 should consider some decision-making theories that are axiologically different from EV.
Regarding the actual EV calculations with numbers, I would say, as I did in a different comment, that I think it is pretty clear that they only carry out EV calculations for illustrative purposes. To quote:
Of course, in either case one could debate these numbers. But, to repeat, all we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence. Given the multitude of plausible ways by which one could have such influence, diverse points of view are likely to agree on this claim
This is the point they are trying to get across by doing the actual EV calculations.