I do think it’s possible, though, to think up circumstances of genuine ‘simple cluelessness’ where, from a subjective standpoint, we really don’t have any reason to think one option may be better or worse than the alternative.
So far, I feel I’ve been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it’s the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
For example we can imagine there being two chairs in front of us and making a choice of which chair to sit on.
Some off-the-top-of-my-head reasons we might not have perfect evidential symmetry here:
One chair might be closer, so walking to it expends less energy and/or takes less time, which has various knock-on effects
One chair will be closer to some other object in the world, making it easier for you to hear what’s going on over there and for people over there to hear you, which could have various knock-on effects
One chair might look very slightly older, and thus very slightly more likely to have splinters or whatever
There doesn’t seem to be any point stressing about this decision (assuming there isn’t some obvious consideration to take into account)
I totally agree, but this is a very different claim from there being a qualitative, absolute distinction between simple and complex cluelessness.
My independent impression is that, for the purpose of evaluating longtermism and things like that, we could basically replace all discussion of simple vs complex cluelessness with the following points:
You’ll typically do a better job achieving an objective (in expectation) if you choose a plan that was highlighted in an effort to try to achieve that objective, rather than choosing a plan that was highlighted in an effort to try to achieve some other objective
This seems like common sense, and is also in line with the “suspicious convergence” idea
Plans like “donate to AMF” were not highlighted to improve the very long-term future
Plans like “donate to reduce AI x-risk” were highlighted largely to improve the very long-term future
A nontrivial fraction of people highlighted this plan for other reasons (e.g., because they wanted to avoid extinction for their own sake or the sake of near-term generations), but a large fraction highlighted it for approximately longtermist reasons (e.g., Bostrom)
On the object-level, it also seems like existing work makes a much more reasonable case for reducing AI x-risk as a way to improve the long-term future than for AMF as a way to improve the long-term future
But then there’s also the fact that those far-future effects are harder to predict than nearer-future effects, and nearer-future effects do matter at least somewhat, so it’s not immediately obvious whether we should focus on the long term or the short term. This is where work like “The Epistemic Challenge to Longtermism” and “Formalising the ‘Washing Out Hypothesis’” becomes very useful.
Also, there are many situations where it’s not worth trying to work out which of two actions is better, due to some mixture of that being very hard to work out and the stakes not being huge
E.g., choosing which chair to sit on; deciding which day to try to conceive a child on
This is basically just a point about value of information and opportunity cost; it doesn’t require a notion of absolute evidential symmetry
(I used AMF and AI x-risk as the examples because you did; we could also state the points in a more general form.)
so I think Greaves’ work has been useful.
FWIW, I also think other work of Greaves has been very useful. And I think most people (though not everyone) who’ve thought about the topic think the cluelessness stuff is much more useful than I think it is (I’m just reporting my independent impression here), so my all-things-considered belief is that that work has probably been more useful than it seems to me.
So far, I feel I’ve been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it’s the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn’t worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn’t seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn’t seem too far-fetched to me (let’s say the chairs look the same at first glance, they’re both in the front row on either side of the aisle, etc.).
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other—otherwise they fall prey to paralysis. Admittedly I haven’t read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think that would probably be his conclusion.
EDIT: To be honest I’m not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing from Greaves was to realise there seems to be an issue of complex cluelessness in the first place where we can’t really form precise credences.
FWIW, I also think other work of Greaves has been very useful. And I think most people (though not everyone) who’ve thought about the topic think the cluelessness stuff is much more useful than I think it is
For me, Greaves’ work on cluelessness highlighted a problem I hadn’t realised was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example, giving to AMF), and I think this should hold even for shorttermists (provided they don’t discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don’t fall prey to this problem, which might be the longtermist ones that Greaves puts forward (although I’m uncertain about this).
I think simple cluelessness is a subjective state.
I haven’t read the relevant papers since last year, but I think I recall the idea being not just that we currently don’t have a sense of what the long-term effects of an action are, but also that we basically can’t gain information about that. In line with that memory of mine, Greaves writes here that the long-term effects of short-termist interventions are “utterly unpredictable”—a much stronger claim than just that we currently have no real prediction. (And I think that that idea is very problematic, as discussed elsewhere in this thread.)
(I’m writing this while a little jetlagged, so it might be a bit incoherent or disconnected from what you were saying.)
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other—otherwise they fall prey to paralysis.
I don’t think this is right. I think the key thing is to remember that doing more analysis (thinking, discussing, researching, whatever) is itself a choice, and itself has a certain expected value (which is related to how long it will take, how likely it is to change what other decision you make, and how much of an improvement that change might be). Sometimes that expected value justifies the opportunity cost, and sometimes it doesn’t. This can be true whether you can or can’t immediately see any difference in the expected value of the two “concrete choices” (this is a term I’m making up to exclude the choice to do further analysis).
E.g., I don’t spend time deciding which of two similar chairs to sit in, and this is the right decision for me to make from a roughly utilitarian perspective, and this is because:
It seems that, even after quite a while spent analysing which chair I should sit in, the expected value I assign to each choice would be quite similar
There are other useful things I can do with my time
The expected value of just choosing a chair right away and then doing certain other things is higher than the expected value of first spending longer deciding which chair to sit in
(Of course, I don’t explicitly go through that whole thought process each time I implicitly make a mundane decision.)
But there are also some cases where the expected values we’d guess each of two actions would have are basically the same and yet we should engage in further analysis. This is true when the opportunity cost of the time spent on that analysis seems justified, in expectation, by the probability that that analysis would cause us to change our decision and the extent to which that change might be an improvement.
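To make that trade-off concrete, here’s a toy back-of-the-envelope sketch in Python (all numbers invented purely for illustration; this is just my framing of the value-of-information point, not anything from Greaves’ papers):

```python
# Toy value-of-information comparison: is further analysis worth its opportunity cost?
# All numbers are invented for illustration.

p_change = 0.10      # probability that further analysis changes which action we pick
improvement = 5.0    # expected gain (arbitrary value units) if our pick does change
analysis_cost = 1.0  # opportunity cost of the time spent analysing (same units)

value_of_analysis = p_change * improvement  # expected benefit of analysing = 0.5

if value_of_analysis > analysis_cost:
    print("Worth analysing further before deciding")
else:
    print("Just pick one now")  # here 0.5 < 1.0, so decide immediately
```

The same comparison applies whether or not the two concrete choices currently look identical in expected value; what matters is how likely further analysis is to change the decision, how big an improvement that change would be, and what the analysis costs.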
So I don’t think the concept of “simple cluelessness” is necessary, and I think it’s unhelpful in that:
It sounds absolute and unchangeable, whereas in many cases one either already has or could come to have a belief about which action would have higher expected value
It implies that there’s something special about certain cases where one has extremely little knowledge, whereas really what’s key is how much information value various actions (e.g., further thinking) would provide and what opportunity cost those actions have