I do think it's possible to think up circumstances of genuine "simple cluelessness", though, where, from a subjective standpoint, we really don't have any reasons to think one option may be better or worse than the alternative.
So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
For example, we can imagine there being two chairs in front of us and making a choice of which chair to sit on.
Some off-the-top-of-my-head reasons we might not have perfect evidential symmetry here:
One chair might be closer, so walking to it expends less energy and/or takes less time, which has various knock-on effects
One chair will be closer to some other object in the world, making it easier for you to hear what's going on over there and for people over there to hear you, which could have various knock-on effects
One chair might look very slightly older, and thus very slightly more likely to have splinters or whatever
There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account)
I totally agree, but this is a very different claim from there being a qualitative, absolute distinction between simple and complex cluelessness.
My independent impression is that, for the purpose of evaluating longtermism and things like that, we could basically replace all discussion of simple vs complex cluelessness with the following points:
You'll typically do a better job achieving an objective (in expectation) if you choose a plan that was highlighted in an effort to achieve that objective, rather than a plan that was highlighted in an effort to achieve some other objective
This seems like common sense, and is also in line with the "suspicious convergence" idea
Plans like "donate to AMF" were not highlighted to improve the very long-term future
Plans like "donate to reduce AI x-risk" were highlighted largely to improve the very long-term future
A nontrivial fraction of people highlighted this plan for other reasons (e.g., because they wanted to avoid extinction for their own sake or the sake of near-term generations), but a large fraction highlighted it for approximately longtermist reasons (e.g., Bostrom)
On the object level, it also seems like existing work makes a much more reasonable case for reducing AI x-risk as a way to improve the long-term future than for AMF as a way to improve the long-term future
But then there's also the fact that those far-future effects are harder to predict than nearer-future effects, and nearer-future effects do matter at least somewhat, so it's not immediately obvious whether we should focus on the long term or the short term. This is where work like "The Epistemic Challenge to Longtermism" and "Formalising the 'Washing Out Hypothesis'" becomes very useful. (A toy sketch of the washing-out intuition follows at the end of this list.)
Also, there are many situations where it's not worth trying to work out which of two actions is better, due to some mixture of that being very hard to work out and the stakes not being huge
E.g., choosing which chair to sit on; deciding which day to try to conceive a child on
This is basically just a point about value of information and opportunity cost; it doesn't require a notion of absolute evidential symmetry
(I used AMF and AI x-risk as the examples because you did; we could also state the points in a more general form.)
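To make the washing-out idea concrete, here's a minimal toy sketch (my own construction, not the model from either of those papers; the persistence parameter and all numbers are assumptions for illustration). The idea: if an action's effect survives into each successive period only with probability p, its expected influence decays geometrically, and how much the far future contributes to the expected-value calculation depends heavily on p.

```python
# Toy model of the "washing out" intuition (my own construction, NOT the
# formal model from either paper; all numbers are made up for illustration).
# Assume an action's effect survives into each later period with per-period
# probability p, so its expected influence on period t scales like p**t.

def expected_influence(value_per_period: float, persistence: float, horizon: int) -> float:
    """Total expected influence: sum of value_per_period * persistence**t over t."""
    return sum(value_per_period * persistence ** t for t in range(horizon))

for p in (0.90, 0.99, 0.999):
    total = expected_influence(1.0, p, 10_000)
    print(f"persistence = {p}: expected influence over 10,000 periods = {total:,.1f}")

# With p = 0.90, almost all expected influence comes from the first few dozen
# periods; as p approaches 1, later periods come to dominate. Part of the
# debate is over which regime real-world interventions are actually in.
```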
so I think Greaves' work has been useful.
FWIW, I also think other work of Greaves has been very useful. And I think most people (though not everyone) who've thought about the topic think the cluelessness stuff is much more useful than I think it is (I'm just reporting my independent impression here), so my all-things-considered belief is that that work has probably been more useful than it seems to me.
So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
I think simple cluelessness is a subjective state. In reality, one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they're both in the front row on either side of the aisle, etc.).
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other; otherwise they fall prey to paralysis. Admittedly, I haven't read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.
EDIT: To be honest, I'm not really sure how important the distinction between simple and complex cluelessness actually is. The most useful thing from Greaves was realising that there seems to be an issue of complex cluelessness in the first place, where we can't really form precise credences.
FWIW, I also think other work of Greaves has been very useful. And I think most people (though not everyone) who've thought about the topic think the cluelessness stuff is much more useful than I think it is
For me, Greaves' work on cluelessness highlighted a problem I didn't think was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example, giving to AMF), and I think this should hold even for short-termists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to the problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).
I think simple cluelessness is a subjective state.
I haven't read the relevant papers since last year, but I think I recall the idea being not just that we currently don't have a sense of what the long-term effects of an action are, but also that we basically can't gain information about that. In line with that memory of mine, Greaves writes here that the long-term effects of short-termist interventions are "utterly unpredictable", which is a much stronger claim than just that we currently have no real prediction.
(And I think that that idea is very problematic, as discussed elsewhere in this thread.)
(I'm writing this while a little jetlagged, so it might be a bit incoherent or disconnected from what you were saying.)
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other; otherwise they fall prey to paralysis.
I don't think this is right. I think the key thing is to remember that doing more analysis (thinking, discussing, researching, whatever) is itself a choice, and itself has a certain expected value (which is related to how long it will take, how likely it is to change what other decision you make, and how much of an improvement that change might be). Sometimes that expected value justifies the opportunity cost, and sometimes it doesn't. This can be true whether you can or can't immediately see any difference in the expected value of the two "concrete choices" (this is a term I'm making up to exclude the choice to do further analysis).
E.g., I don't spend time deciding which of two similar chairs to sit in, and this is the right decision for me to make from a roughly utilitarian perspective, because:
It seems that, even after quite a while spent analysing which chair I should sit in, the expected value I assign to each choice would be quite similar
There are other useful things I can do with my time
The expected value of just choosing a chair right away and then doing certain other things is higher than the expected value of first spending longer deciding which chair to sit in
(Of course, I don't explicitly go through that whole thought process each time I implicitly make a mundane decision.)
But there are also some cases where the expected values we'd guess each of two actions would have are basically the same and yet we should engage in further analysis. This is true when the opportunity cost of the time spent on that analysis seems justified, in expectation, by the probability that that analysis would cause us to change our decision and the extent to which that change might be an improvement.
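As a concrete illustration of that trade-off, here's a minimal sketch with made-up numbers (the probabilities, gains, and costs below are assumptions, not estimates): further analysis is worth doing exactly when its expected improvement to the decision outweighs its opportunity cost.

```python
# Toy value-of-information comparison with made-up numbers (an illustration of
# the trade-off described above, not a general decision procedure).

def ev_decide_now(ev_default: float) -> float:
    """Expected value of acting immediately on the current best guess."""
    return ev_default

def ev_analyse_first(ev_default: float, p_switch: float,
                     gain_if_switch: float, analysis_cost: float) -> float:
    """Expected value of analysing first: the analysis costs something, and
    with probability p_switch it flips us to an option better by gain_if_switch."""
    return ev_default + p_switch * gain_if_switch - analysis_cost

# Choosing a chair: analysis is unlikely to change the choice, any improvement
# would be tiny, and even brief deliberation has some cost.
print(ev_decide_now(10.0))                        # 10.0
print(ev_analyse_first(10.0, 0.01, 0.1, 0.5))     # 9.501 -> just sit down

# Choosing where to donate: same structure, but analysis plausibly changes the
# decision and the stakes are large, so analysis wins despite its cost.
print(ev_decide_now(100.0))                       # 100.0
print(ev_analyse_first(100.0, 0.3, 500.0, 20.0))  # 230.0 -> analyse first
```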
So I don't think the concept of "simple cluelessness" is necessary, and I think it's unhelpful in that:
It sounds absolute and unchangeable, whereas in many cases one either already has or could come to have a belief about which action would have higher expected value
It implies that there's something special about certain cases where one has extremely little knowledge, whereas really what's key is how much information value various actions (e.g., further thinking) would provide and what opportunity cost those actions have