Cool. I do think that when I try to translate your position into the ontology used by Greaves+MacAskill, it sounds less like “longtermism is wrong” and more like “maybe longtermism is technically correct; who cares?; the practical advice people are hearing sucks”.
I think that’s an interestingly different objection, and if it’s what you actually want to say it could be important to make sure that people don’t hear it as “longtermism is wrong” (because that could lead them to look at the wrong type of thing to try to refute you).
I think the ontology used by Greaves+MacAskill is poor. I skim-read their Case for Strong Longtermism paper honestly expecting it to be great (Will is generally pretty sensible) but I came away quite confused as to what case was being made.
Ben – maybe there needs to be more of an exercise to disentangle what is meant by longtermism before it can be critiqued fairly.
Owen – I am not sure if you would agree, but as far as I can tell the points you make about bounded rationality in the excellent post you link to above contradict the Case for Strong Longtermism paper. E.g.:
Greaves+MacAskill: “we assumed that the correct way to evaluate options … is in terms of expected value” (as far as I can tell their entire point is that you can always do an expected value calculation and “ignore all the effects contained in the first 100” years).
You: “if we want to make decisions on longtermist grounds, we are going to end up using some heuristics”
(as far as I can tell their entire point is that you can always do an expected value calculation and “ignore all the effects contained in the first 100” years)
Yes, exactly. One can always find some expected value calculation that allows one to ignore present-day suffering. And worse, one can keep doing this between now and eternity, to ignore all suffering forever. We can describe this using the language of “falsifiability” or “irrefutability” or whatever—the word choice doesn’t really matter here. What matters is that this is a very dangerous game to be playing.
I think it is worth trying to judge the paper / case for longtermism charitably. I do not honestly think that Will means that we can literally ignore everything in the first 100 years – for a start, simply because the short term affects the long term. If you want to evaluate interventions, even those designed for long-term impact, you need to look at the short-term impacts.
But that is where I get stuck trying to work out what Will + Hilary mean. I think they are saying more than just that you should look at the long- and short-term effects of interventions (trivially true under most ethical views).
They seem to be making empirical, not philosophical, claims about the current state of the world.
They appear to argue that if you use expected value calculations for decision making then you will arrive at the conclusion that you should care about highly speculative long-term effects over clear short-term effects. They combine this with an assumption that expected value calculations are the correct decision-making tool to conclude that long-term interventions are most likely to be the best interventions.
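To make the shape of that argument concrete, here is a toy calculation. The numbers are entirely my own invention, not anything from the paper:

```python
# Toy illustration (my own made-up numbers): in a naive expected value
# calculation, a tiny probability of influencing a vast future can swamp
# a clear, well-evidenced short-term benefit.

p_success = 1e-10              # speculative chance the long-term intervention works
future_lives_affected = 1e24   # assumed number of future lives at stake if it does
short_term_lives_helped = 1e3  # certain short-term benefit of the alternative

ev_long_term = p_success * future_lives_affected   # = 1e14
ev_short_term = short_term_lives_helped            # = 1e3

# The speculative option "wins" by eleven orders of magnitude,
# however flimsy the probability estimate is.
print(ev_long_term, ev_short_term)
```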
I think:
the logic of the argument is roughly correct.
the empirical claims made are dubious and ideally need more than a few examples to justify, but it is plausible they are correct. I think there is at least a decent case for marginal extra resources being directed to x-risk prevention in the world today.
the assumption that expected value calculations are the correct decision-making tool is incorrect (as per others at GPI like Owen’s work and Andreas’ work, bounded rationality, the entire field of risk management, economists like Taleb, Knightian uncertainty, etc.). A charitable reading would say that they recognise this is an assumption but choose not to address it.
Hmmm… I now feel I have a slightly better grasp of what the arguments are after having written that. (Ben, I think this counts as disentangling some of the claims made, and more such work could be useful.)
Vadmas – I think there can be grounds for refusing to follow arguments that you cannot disprove based solely on the implausibility or repugnance of their conclusions, which appears to be your response to their paper. I am not sure it is needed, as I don’t think the case for strong longtermism is well made.
I’d say they mean you can effectively ignore the differences in terminal value in the short term, e.g. the welfare of individuals in the short term only really matters for informing long-term consequences and effectively not in itself, since it’s insignificant compared to differences in long-term value.
In other words, short-term welfare is effectively not an end in itself.
Yeah, that is a good way of putting it. Thank you.

It is of course a feature of trying to prioritise between causes in order to do the most good that some groups will be effectively ignored.
Luckily, in this case, if done in a sensible manner, I would expect a strong correlation between short-term welfare and long-run welfare, since managing high uncertainty should involve some amount of ensuring good feedback loops and iterating: taking action that changes things for the better (for the long run, but in a way that affects the world now), learning, and improving. Building the EA community, developing clean meat, improving policy making, etc.
(Unfortunately I am not sure to what extent this is a key part of the EA longtermist paradigm at present.)
I think the assumption that expected value calculations are the correct decision-making tool is incorrect (as per others at GPI like Owen’s work and Andreas’ work, bounded rationality, the entire field of risk management, economists like Taleb, Knightian uncertainty, etc.). A charitable reading would say that they recognise this is an assumption but choose not to address it.
Hmm perhaps you need to read the paper again. They say for example:
In sections 2 and 3, we will at times help ourselves to some popular but controversial axiological and decision-theoretic assumptions (specifically, total utilitarianism and expected utility theory). This, however, is mainly for elegance of exposition. Section 4 conducts the corresponding sensitivity analyses, and argues that plausible ways of deviating from these assumptions are unlikely to undermine the argument
Indeed they go on in section 4.5 to consider other decision theories, including Knightian uncertainty, and conclude that strong longtermism is robust to these other theories. I’m not saying they’re definitely right, just that they haven’t assumed expected value theory is correct as you claim.
OK Jack, I have some time today so let’s dive in.

So, my initial reading of 4.5 was that they get it very, very wrong.
E.g.: “we assumed that the correct way to evaluate options in ex ante axiological terms, under conditions of uncertainty, is in terms of expected value”. Any of the points above would disagree with this.
E.g.: “[Knightian uncertainty] supports, rather than undermining, axiological strong longtermism”. This is just not true. Some Knightian uncertainty methods would support this (e.g. robust decision making) and some would not (e.g. plan-and-adapt).
So why does it look like they get this so wrong?
Maybe they are trying to achieve something different from what we in this thread think they are trying to achieve.
My analysis of their analysis of Knightian uncertainty can shed some light here.
The point of Knightian (or deep) uncertainty tools is that an expected value calculation is the wrong tool for humans to use when making decisions under Knightian uncertainty: as a decision tool, an expected value calculation will not lead to the best outcome, the outcome with the highest true expected value. [Note: I use “true expected value” to mean the expected value if there were no uncertainty, which can differ from the calculated expected value.] The aim is still the same (to maximise true expected value) but the approach is different. Why the different approach? Because in practice expected value calculations do not work well – they lead to anchoring, lead to unknown unknowns being ignored, are super-sensitive to speculation, etc. The tools used are varied but include tactics such as encouraging decision makers to aim for an option that is satisficing (least bad) on a variety of domains rather than maximising (this specific tool minimises the risk of unknown unknowns being ignored).
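Here is a minimal sketch of the difference I mean, with toy numbers of my own and with “pick the option whose worst case is least bad” standing in for the whole family of robust / satisficing tools:

```python
# Toy sketch (my own numbers): an expected value maximiser and a
# "least bad worst case" satisficer can pick different options from the
# same payoff table, because the EV rule leans entirely on probability
# estimates that, under Knightian uncertainty, we do not really have.

payoffs = {
    # option: payoff in each of three scenarios we can imagine
    "speculative_bet": {"s1": 1000, "s2": 0, "s3": -500},
    "robust_option":   {"s1": 10,   "s2": 10, "s3": 5},
}
estimated_probs = {"s1": 0.3, "s2": 0.5, "s3": 0.2}  # guesses, not knowledge

def expected_value(option):
    return sum(estimated_probs[s] * v for s, v in payoffs[option].items())

def worst_case(option):
    return min(payoffs[option].values())

print(max(payoffs, key=expected_value))  # speculative_bet (EV 200 vs 9)
print(max(payoffs, key=worst_case))      # robust_option (worst case 5 vs -500)
```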
But when Will+Hilary explain Knightian uncertainty they explain it as if it poses a fundamental axiological difference – as if aiming for the least bad option is done because the least bad option is the true best option (as opposed to the calculated best option, if that makes sense). This is not at all what anyone I know who uses these tools believes.
Let’s pause and note that, as Knightian uncertainty tools are still aiming at guiding actors towards the true highest expected value, they could theoretically be explained in terms of expected value. They don’t challenge the expected value axiology.
Clearly Will+Hilary are not, in this paper, interested in whether it poses an alternative methodology for reaching the true expected value; they are only interested in whether it could be used to justify a different axiology. This would explain why this paper ignores all the other tools (like predict-then-act tools), focuses on this one tool, and explains it in a strange way.
The case they are making (by my charitable reading) is that, if we are aiming for true expected value, then, because the future is so very big, we should expect to be able to find at least some options that influence it, and the thing that does the most good is likely to be among those options.
They chose expected value calculations as a way to illustrate this.
As Owen says here, they are “talking about how an ideal rational actor should behave – which I think is informative but not something to be directly emulated”.
They do not seem to be aiming to say anything on how to make decisions about what to focus on.
So I stand by my claim that the most charitable reading is that they are deliberately not addressing how to make decisions.
--
As far as I can tell, in layman’s terms, this paper tries to make the case that: If we had perfect information [edit: on expected value] the options that would be best to do would be those that positively affect the far future. So in practice looking towards those kinds of options is a useful tool to apply when we are deciding what to do.
FWIW I expect this paper is largely correct (if the conclusion is as above). However, I think it could be improved in some ways:
It is opaque. Maybe it is clearer to fellow philosophers, but I reached my view of what the paper was trying to achieve by looking at how they manage to mis-explain a core decision-making concept two-thirds of the way through, and then extrapolating the ways they could be rationally making their apparent errors. It is not easy to understand what they are doing, and I think most people on this thread would have a different view to me about this paper. It would be good to have a bit more text for us layfolk.
It could be misconstrued. Work like this leads people to think that Will+Hilary and others believe that expected value calculations are the key tool for decision making. They are not. (I am assuming they only reference expected value calculations for illustrative purposes; if I am incorrect then their paper is either poor or I really don’t get it.)
It leaves unanswered questions, but does not make it clear what those questions are. I do think it is useful to know that we should expect the highest-impact actions to be those that have long-run positive consequences. But how the hell should anyone actually make a decision and compare the short term and the long term? This paper does not help with this. It could maybe highlight the need to research this.
It is a weak argument. It is plausible to me that alternative decision-making tools might confuse their conclusions so much that, when applied in practice by a philanthropist etc., the result largely does not apply.
For example, one could believe that economic growth is good for the future, that most people who try to impact the world positively without RCT-level evidence fail, and that situations of high uncertainty are best resolved through engineering short feedback loops, and quite rationally conclude that AMF (bednets) is currently the charity that has the biggest positive long-run effect on the future. I don’t think this contradicts anything in the paper and I don’t think it would be unreasonable.
There are other flaws with the paper too, in the more empirical part with all the examples. E.g. even a very, very low discount rate to account for things like extinction risk or sudden windfall really quickly reduces how much the future matters. (Note this is different from pure time-preference discounting.) I sketch some rough numbers on this below.
In my view they overstate (or are misleading about) what they have achieved. E.g. I do not think, for the reasons given, that they have at all shown that “plausible deviations from [an expected utility treatment of decision-making under uncertainty] do not undermine the core argument”. (This is only true insofar as decision-making approaches are, as far as I can tell, not at all relevant to their core argument.) They have maybe shown something like: “plausible deviations from expected utility theory do not undermine the core argument”.
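To put rough numbers on the discount-rate point above (this is my own back-of-the-envelope model, not anything from the paper):

```python
# If each year carries an independent probability r that the value stream
# stops counting (extinction, a sudden windfall that makes today's efforts
# irrelevant, etc.), the expected weight of the entire future is the
# geometric series: sum over t of (1 - r)^t = 1 / r present-years.

for annual_rate in (0.01, 0.001, 0.0001):
    print(f"annual rate {annual_rate:.2%}: future worth ~{1 / annual_rate:,.0f} present-years")

# annual rate 1.00%: future worth ~100 present-years
# annual rate 0.10%: future worth ~1,000 present-years
# annual rate 0.01%: future worth ~10,000 present-years
```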
Let me know what you think.

Catch ya about :-)

@ Ben_Chugg

Curious how much you would agree with a statement like:
If we had perfect information [edit: on expected value] the options that would be best to do would be those that positively affect the far future. So in practice looking towards those kinds of options is a useful tool to apply when we are deciding what to do.
(This is my very charitable, weak interpretation of what the Case for Strong Longtermism paper is attempting to argue)
I think I agree, but there’s a lot smuggled into the phrase “perfect information on expected value”. So much in fact that I’m not sure I can quite follow the thought experiment.
When I think of “perfect information on expected value”, my first thought is something like a game of roulette. There’s no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient—we would know what sort of future knowledge will be developed, etc. Is this also what you had in mind?
(To complicate matters, the roulette analogy is imperfect. For a typical game of roulette we can write down a pretty robust probabilistic model. But it’s only a model. We could also study the precise physics of that particular roulette board, model the hand spinning the wheel (is that how roulette works? I don’t even know), take into account the initial position, the toss of the white ball, and so on and so forth. If we spent a long time doing this, we could come up with a model which was more accurate than our basic probabilistic model. This is all to say that models are tools suited for a particular purpose. So it’s unclear to me what the analogue would be for the future that allowed us to write down a precise model—which is implicitly required for EV calculations.)
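To spell out what I mean by a fully specified model, here is the roulette case, using standard European-wheel numbers purely for illustration:

```python
# A straight-up bet on a European roulette wheel: every parameter of the
# probabilistic model is known, so computing the expected value is trivial.
# This is the sense of "perfect information on expected value" I have in mind.

pockets = 37          # numbers 0-36
payout_to_one = 35    # a winning single-number bet pays 35 to 1

p_win = 1 / pockets
ev_per_unit_staked = p_win * payout_to_one + (1 - p_win) * (-1)
print(ev_per_unit_staked)  # -1/37, about -0.027 per unit staked
```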
Hi Ben. I agree with you. Yes, I think roulette is a good analogy. And yes, I think “perfect information on expected value” is a strange claim to make.
But I do think it is useful to think about what could be said and justified. I do think a claim along these lines could be made, and it would not be wholly unfalsifiable, and it would not require completely privileging Bayesian expected value calculations.
To give another analogy I think there is a reasonable long-termist equivalent of statements like:
Because of differences in wealth and purchasing power we expect that a donor in the developed west can have a much bigger impact overseas than in their home country. So in practice looking towards those kinds of international development options is a useful tool to apply when we are deciding what to do.
This does not completely exclude the possibility that we can have impact locally with donations, but it does direct our searching.
Being charitable to Will+Hilary, maybe that is all they are saying. And maybe it is so confusing because they have dressed it up in philosophical language – but this is because, as per GPI’s goals, this paper is about engaging philosophy academics rather than producing any novel insight.
(To be more critical, I am not convinced that Will+Hilary successfully give sufficient evidence to make such a claim in this paper – and also see my list above of ways their paper could improve.)
Thanks for this! All interesting and I will have to think about this more carefully when my brain is fresher. I admit I’m not very familiar with the literature on Knightian uncertainty and it would probably help if I read some more about that first.
It is misleading. Work like this leads people to think that Will+Hilary and others believe that expected value calculations are the key tool for decision making. They are not. (I am assuming they only reference expected value calculations for illustrative purposes, if I am incorrect then their paper is either really poor or I really don’t get it.)
OK, if I understand you correctly, what you have said is that Will and Hilary present Knightian uncertainty as axiologically different to EV reasoning, when you don’t think it is. I agree with you that ideally section 4.5 should consider some decision-making theories that are genuinely axiologically different from EV.
Regarding the actual EV calculations with numbers, I would say, as I did in a different comment, that I think it is pretty clear that they only carry out EV calculations for illustrative purposes. To quote:
Of course, in either case one could debate these numbers. But, to repeat, all we need is that there be one course of action such that one ought to have a non-minuscule credence in that action’s having non-negligible long-lasting influence. Given the multitude of plausible ways by which one could have such influence, diverse points of view are likely to agree on this claim
This is the point they are trying to get across by doing the actual EV calculations.
I agree that there’s a tension in how we’re talking about it. I think that Greaves+MacAskill are talking about how an ideal rational actor should behave—which I think is informative but not something to be directly emulated for boundedly rational actors.
Ah yes, thank you Owen. That helps me construct a sensible, positive, charitable reading of their paper.
There is of course a risk that people take their paper / their views on longtermism and the expected value approach to be more decision-guiding than perhaps they ought.
(I think it might be an overly charitable reading – the paper does briefly mention and then dismiss concerns about decision making under uncertainty, etc. – although it is only a draft, so it is reasonable to be charitable.)
Oh interesting. Did you read my critique as saying that the philosophy is wrong? (Not sarcastic; serious question.) I don’t really even know what “wrong” would mean here, honestly. I think the reasoning is flawed and if taken seriously leads to bad consequences.
I read your second critique as implicitly saying “there must be a mistake in the argument”, whereas I’d have preferred it to say “the things that might be thought to follow from this argument are wrong (which could mean a mistake in the argument that’s been laid out, or in how its consequences are being interpreted)”.