There are indeed scenarios in which we can safely ignore long-term effects. To steal an example from Phil Trammell's note on cluelessness: when we are deciding whether to conceive a child on a Tuesday or a Wednesday, any chance that one of the options might have some long-run positive or negative consequence will be counterbalanced by an equal chance that the other will have that consequence. In other words, there is evidential symmetry across the available choices. Hilary Greaves has dubbed such a scenario "simple cluelessness", and argues that in this case we are justified in ignoring long-run effects. However, it seems that often we don't have such evidential symmetry. In the conception example, by contrast, we simply can't say anything about the long-term effects of choosing to conceive a child on a particular day, and so we do have evidential symmetry.
I think you're accurately reflecting what these authors say, but my independent impression is that this point is mistaken. I.e., I think the idea of a qualitative distinction between simple cluelessness and complex cluelessness doesn't make sense.
I describe my position, and why I think the examples typically used don't support the points the authors want them to support, here. Here I'll just briefly suggest some reasons why we can say something about the long-term effects of choosing to conceive a child on a particular day:
One day later means the child should be expected to be born roughly one day later, and thus will be roughly one day younger at any future point. This probably very slightly slows down GDP growth, intellectual progress, population growth (via the child later having their own children), and growth in carbon emissions from what the child does themselves, and maybe also delays cuts in carbon emissions due to tech advancement or policy change or whatever. This could then be good or bad for the long-term future depending on whether things like GDP growth, intellectual progress, and population growth are good or bad for the long-term future, which it also seems we can say something about (see e.g. Differential Progress).
I don't want to start a pointless industry of alternately "shooting down" and refining purported cases of simple cluelessness, but just for fun, here is another reason why our cluelessness regarding "conceiving a child on Tuesday vs. Wednesday" really is complex:
Shifting the time of conception by one day (ignoring the empirical complication pointed out by Denise below) also shifts the probability distribution of birth date by weekday, e.g. whether the baby's birth occurs on a Tuesday or Wednesday. However, for all we know the weekday of birth has a systematic effect on birth-related health outcomes for mother or child. For instance, consider some medical complication occurring during labor with weekday-independent probability, which needs to be treated in a hospital. We might then worry that on a Wednesday healthcare workers will tend to be more overworked, and so slightly more likely to make mistakes, than on a Tuesday (because many of them will have had the weekend off, and so on Wednesday they've been through a longer stretch of workdays without significant time off). On the other hand, we might think that people are reluctant to go to a hospital on a weekend, such that there'll be a "rush" on hospitals on Mondays which takes until Wednesday to "clear" – making Monday or Tuesday in fact more stressful for healthcare workers. And so on and so on …
(This is all made up, but if I google for relevant terms I pretty quickly find studies such as "Weekday of Surgery Affects Postoperative Complications and Long-Term Survival of Chinese Gastric Cancer Patients after Curative Gastrectomy", "Outcomes are Worse in US Patients Undergoing Surgery on Weekends Compared With Weekdays", "Influence of weekday of surgery on operative complications. An analysis of 25,000 surgical procedures", etc. I'm sure many of these studies are terrible, but their existence illustrates that it might be pretty hard to justify an epistemic state that is committed to the effects of different weekdays exactly canceling out.)
(It doesn't help if we could work out the net effect on all health outcomes at birth, say because we can look at empirical data from hospitals. Presumably some non-zero net effect on, e.g., whether or not we increase the total human population by 1 at an earlier time would remain, and then we're caught in the "standard" complex cluelessness problem of working out whether the long-term effects of this are net positive or net negative, etc.)
I'm wondering if a better definition of simple cluelessness would be something like: "While the effects don't 'cancel out', we are justified in believing that their net effect will be small compared to differences in short-term effects."
I'm wondering if a better definition of simple cluelessness would be something like: "While the effects don't 'cancel out', we are justified in believing that their net effect will be small compared to differences in short-term effects."
I think that that's clearly a good sort of sentence to say. But:
I don't think we need the "simple vs complex cluelessness" idea to say that
I really don't want us to use the term "clueless" for that! That sounds very absolute, and I think was indeed intended by Greaves to be absolute (see her saying "utterly unpredictable" here).
I don't want us to have two terms that (a) sound like they're meant to be sharply distinct, and (b) were (if I recall correctly) indeed originally presented as sharply distinct.
(I outlined my views on this a bit more in this thread, which actually happens to have been replies to you as well.)
Why can't we simply talk in terms of having more or less "resilient" or "justified" credences, in terms of how large the value of information from further information-gathering or information-analysis would be, and in terms of the value of what we could've done with that time or those resources otherwise?
It seems like an approach that's more clearly about quantitative differences in degree, rather than qualitative differences in kind, would be less misleading and more useful.
It's been a year since I thought about this much, and I only read two of the papers and a bunch of the posts/comments (so I didn't, e.g., read Trammell's paper as well). But from memory, I think there are at least two important ways in which the standard terms and framing of simple vs complex cluelessness have caused issues:
Many people seem to have taken the cluelessness stuff as an argument that we simply can't say anything at all about the long-term future, whereas we can say something about the near-term future, so we should focus on the near-term future.
Greaves seems to instead want to argue that we basically, at least currently, can't say anything at all about the long-term effects of interventions like AMF, whereas we can say something about the long-term effects of a small set of interventions chosen for their long-term effects (e.g., some x-risk reduction efforts), so we should focus on the long-term future.
See e.g. here, where Greaves says the long-term effects of short-termist interventions are "utterly unpredictable".
My independent impression is that both of these views are really problematic, and that the alternative approach used in Tarsney's epistemic challenge paper is just obviously far better. We should just think about how predictable various effects on various timelines from various interventions are. We can't just immediately say that we should definitely focus on neartermist interventions or that we should definitely focus on longtermist interventions; it really depends on specific questions that we actually can improve our knowledge about (through efforts like building better models or collecting more evidence about the feasibility of long-range forecasting).
Currently, this is probably the main topic in EA where it feels to me like there's something important that's just really obviously true and that lots of other really smart people are missing. So I should probably find time to collect my thoughts from various comments into a single post that lays out the arguments better.
When this post went up, I wrote virtually the same comment, but never sent it! Glad to see you write it up, as well as your comments below.
I have the impression that in each supposed example of "simple cluelessness" people just aren't being creative enough to see the "complex cluelessness" factors, as you clarify with the chairs in your other comment.
My original comment even included saying how Phil's example of simple cluelessness is false, but it's false for different reasons than you think:
If you try to conceive a child a day later, this will not, in expectation, impact when the child will be born. The impact is actually much stronger than that. It will affect whether you are able to conceive in this cycle at all, since eggs can only be fertilized during a very brief window of time (12–24 hours). If you are too late, no baby.
To be honest, I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was to realise there seems to be an issue of complex cluelessness in the first place – where we can't really form precise credences in certain instances where people have traditionally felt like they can, and that these instances are often faced by EAs when they're trying to do the most good.
Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, but we don't really have our "EA hat on" when doing these things. In other words, I'm not having a child to do the most good, I'm doing it because I want to. So I guess in these circumstances I don't really care about my complex cluelessness. When giving to charity, I very much do care about any complex cluelessness, because I'm trying to do the most good and really thinking hard about how to do so.
I'm still not sure if I would class myself as complexly clueless when deciding which chair to sit on (I think from a subjective standpoint I at least feel simply clueless), but I'm also not sure this particular debate really matters.
I'm also inclined to agree with this. I actually only very recently realized that a similar point had also been made in the literature: in this 2019 "discussion note" by Lok Lam Yim, which is a reply to Greaves's cluelessness paper:
This distinction between "simple" and "complex" cases of cluelessness, though an ingenious one, ultimately fails. Upon heightened scrutiny, a so-called "simple" case often collapses into a "complex" case. Let us consider Greaves's example of a "simple" case: helping an old lady cross the road. It is possible that this minor act of kindness has some impacts of systematic tendencies of a "complex" nature. For instance, future social science research may show that old ladies often tell their grandchildren benevolent stories they have encountered to encourage their grandchildren to help others. Future psychological research may show that small children who are encouraged to help others are usually more charitable, and these children, upon reaching adulthood, are generally more sympathetic to the effective altruism movement, which Greaves considers a "complex" case. This shows that a so-called "simple" decision (such as whether to help an old lady to cross the road) can systematically lead to consequences of a "complex" nature (such as an increase in the possibility of their grandchildren joining the effective altruism movement), thereby suffering from the same problem of genuine cluelessness as a "complex" case.
Morally important actions are often, if not always, others-affecting. With the advancement of social science and psychological research, we are likely to discover that most others-concerning actions have some systematic impacts on others. These systematic impacts may lead to another chain of systematic impacts, and so on. Along the chain of systematic impacts, it is likely that at least one of them is of a "complex" nature.
Interesting – that's fairly similar to the counterarguments I gave for the same case here:
I think all of [the three key criteria Greaves proposes for a case to involve complex cluelessness] actually appl[y] to the old lady case, just very speculatively. One reason to think [the first criterion applies] is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.
Importantly, there isn't a "precise counterpart, precisely as plausible as the original" for this story. That'd have to be something like people seeing this act and therefore thinking unkindness, bullying, etc. are more the norm than they previously thought, which is clearly less plausible.
One reason to think [the second criterion applies] for the old lady case could jump off from that story: maybe your act sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell-type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.
Your critique of the conception example might be fair, actually. I do think it's possible to think up circumstances of genuine "simple cluelessness" though, where, from a subjective standpoint, we really don't have any reason to think one option may be better or worse than the alternative.
For example, we can imagine there being two chairs in front of us and making a choice of which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is certainly possible that choosing the left chair over the right chair could be a terrible decision ex post. So I do think this decision is qualitatively different to donating to AMF.
However, I think the reason why Greaves introduces the distinction between complex and simple cluelessness is to save consequentialism from Lenman's cluelessness critique (going by hazy memory here). If a much wider class of decisions suffers from complex cluelessness than Greaves originally thought, this could prove problematic for her defence. Having said that, I do still think that something like working on AI alignment probably avoids complex cluelessness, for the reasons I give in the post, so I think Greaves's work has been useful.
I do think it's possible to think up circumstances of genuine "simple cluelessness" though, where, from a subjective standpoint, we really don't have any reason to think one option may be better or worse than the alternative.
So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
For example, we can imagine there being two chairs in front of us and making a choice of which chair to sit on.
Some off-the-top-of-my-head reasons we might not have perfect evidential symmetry here:
One chair might be closer, so walking to it expends less energy and/or takes less time, which has various knock-on effects
One chair will be closer to some other object in the world, making it easier for you to hear what's going on over there and for people over there to hear you, which could have various knock-on effects
One chair might look very slightly older, and thus be very slightly more likely to have splinters or whatever
There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account)
I totally agree, but this is a very different claim from there being a qualitative, absolute distinction between simple and complex cluelessness.
My independent impression is that, for the purpose of evaluating longtermism and things like that, we could basically replace all discussion of simple vs complex cluelessness with the following points:
You'll typically do a better job achieving an objective (in expectation) if you choose a plan that was highlighted in an effort to try to achieve that objective, rather than choosing a plan that was highlighted in an effort to try to achieve some other objective
This seems like common sense, and is also in line with the "suspicious convergence" idea
Plans like "donate to AMF" were not highlighted to improve the very long-term future
Plans like "donate to reduce AI x-risk" were highlighted largely to improve the very long-term future
A nontrivial fraction of people highlighted this plan for other reasons (e.g., because they wanted to avoid extinction for their own sake or the sake of near-term generations), but a large fraction highlighted it for approximately longtermist reasons (e.g., Bostrom)
On the object level, it also seems like existing work makes a much more reasonable case for reducing AI x-risk as a way to improve the long-term future than for AMF as a way to improve the long-term future
But then there's also the fact that those far-future effects are harder to predict than nearer-future effects, and nearer-future effects do matter at least somewhat, so it's not immediately obvious whether we should focus on the long term or the short term. This is where work like "The Epistemic Challenge to Longtermism" and "Formalising the 'Washing Out Hypothesis'" becomes very useful.
Also, there are many situations where it's not worth trying to work out which of two actions is better, due to some mixture of that being very hard to work out and the stakes not being huge
E.g., choosing which chair to sit on; deciding which day to try to conceive a child on
This is basically just a point about value of information and opportunity cost; it doesn't require a notion of absolute evidential symmetry
(I used AMF and AI x-risk as the examples because you did; we could also state the points in a more general form.)
so I think Greaves's work has been useful.
FWIW, I also think other work of Greaves has been very useful. And I think most people – though not everyone – who've thought about the topic think the cluelessness stuff is much more useful than I think it is (I'm just reporting my independent impression here), so my all-things-considered belief is that that work has probably been more useful than it seems to me.
So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected-value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they are both in the front row on either side of the aisle, etc.).
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other – otherwise they fall prey to paralysis. Admittedly, I haven't read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.
EDIT: To be honest, I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing from Greaves was to realise there seems to be an issue of complex cluelessness in the first place, where we can't really form precise credences.
FWIW, I also think other work of Greaves has been very useful. And I think most people – though not everyone – who've thought about the topic think the cluelessness stuff is much more useful than I think it is
For me, Greaves's work on cluelessness just highlighted a problem I didn't think was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example giving to AMF), and I think this should hold even for short-termists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to this problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).
I think simple cluelessness is a subjective state.
I haven't read the relevant papers since last year, but I think I recall the idea being not just that we currently don't have a sense of what the long-term effects of an action are, but also that we basically can't gain information about that. In line with that memory of mine, Greaves writes here that the long-term effects of short-termist interventions are "utterly unpredictable" – a much stronger claim than just that we currently have no real prediction.
(I'm writing this while a little jetlagged, so it might be a bit incoherent or disconnected from what you were saying.)
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other – otherwise they fall prey to paralysis.
I don't think this is right. I think the key thing is to remember that doing more analysis (thinking, discussing, researching, whatever) is itself a choice, and itself has a certain expected value (which is related to how long it will take, how likely it is to change what other decision you make, and how much of an improvement that change might be). Sometimes that expected value justifies the opportunity cost, and sometimes it doesn't. This can be true whether or not you can immediately see any difference in the expected value of the two "concrete choices" (a term I'm making up to exclude the choice to do further analysis).
E.g., I don't spend time deciding which of two similar chairs to sit in, and this is the right decision for me to make from a roughly utilitarian perspective, because:
It seems that, even after quite a while spent analysing which chair I should sit in, the expected value I assign to each choice would be quite similar
There are other useful things I can do with my time
The expected value of just choosing a chair right away and then doing certain other things is higher than the expected value of first spending longer deciding which chair to sit in
(Of course, I don't explicitly go through that whole thought process each time I implicitly make a mundane decision.)
But there are also some cases where the expected values we'd guess each of two actions would have are basically the same and yet we should engage in further analysis. This is true when the opportunity cost of the time spent on that analysis seems justified, in expectation, by the probability that the analysis would cause us to change our decision and the extent to which that change might be an improvement.
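The reasoning above is essentially a value-of-information calculation. As a toy sketch (the function name and all numbers here are invented purely for illustration, not drawn from any of the papers discussed):

```python
# Toy sketch of the value-of-information reasoning above.
# All names and numbers are illustrative assumptions, not from the discussion.

def worth_further_analysis(p_change_decision, gain_if_changed, opportunity_cost):
    """Is further analysis worth it, in expectation?

    p_change_decision: probability the analysis changes which action we pick
    gain_if_changed:   expected improvement (arbitrary value units) if it does
    opportunity_cost:  expected value of what we'd have done with that time instead
    """
    expected_value_of_information = p_change_decision * gain_if_changed
    return expected_value_of_information > opportunity_cost

# Two similar chairs: analysis might well change our pick, but the
# improvement would be tiny, so it isn't worth even a moment's thought.
print(worth_further_analysis(0.5, 0.0001, 0.01))   # False

# Two charities: a changed decision could matter a lot, so analysis is
# worth it even though the options currently look similarly good.
print(worth_further_analysis(0.2, 1000.0, 5.0))    # True
```

The point is that nothing here requires a qualitative notion of "simple cluelessness": both cases run through the same calculation, and they differ only in the numbers.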
So I don't think the concept of "simple cluelessness" is necessary, and I think it's unhelpful in that:
It sounds absolute and unchangeable, whereas in many cases one either already has or could come to have a belief about which action would have higher expected value
It implies that there's something special about certain cases where one has extremely little knowledge, whereas really what's key is how much information value various actions (e.g., further thinking) would provide and what opportunity cost those actions have
I think youâre accurately reflecting what these authors say, but my independent impression is that this point is mistaken. I.e., I think the idea of a qualitative distinction between simple cluelessness and complex cluelessness doesnât make sense.
I describe my position, and why I think the examples typically used donât support the points the authors want them to support, here. Here Iâll just briefly suggest some reasons why we can say something about the long-term effects of choosing to conceive a child on a particular day:
One day later means the child should be expected to be born roughly one day later, and thus will be roughly one day younger at any future point. This probably very slightly slows down GDP growth, intellectual progress, population growth (via the child later having their own children), growth in carbon emissions due to what the child does themselves, maybe also cuts in carbon emission due to tech advancement or policy change or whatever, etc. Then this could be good or bad for the long-term future based on whether things like GDP growth, intellectual progress, population growth, etc. are good or bad for the long-term future, which it also seems we can say something about (see e.g. Differential Progress).
I donât want to start a pointless industry of alternatively âshooting downâ & refining purported cases of simple cluelessness, but just for fun here is another reason for why our cluelessness regarding âconceiving a child on Tuesday vs. Wednesdayâ really is complex:
Shifting the time of conception by one day (ignoring the empirical complication pointed out by Denise below) also shifts the probability distribution of birth date by weekday, e.g. whether the babyâs birth occurs on a Tuesday or Wednesday. However, for all we know the weekday of birth has a systematic effect on birth-related health outcomes of mother or child. For instance, consider some medical complication occurring during labor with weekday-independent probability, which needs to be treated in a hospital. We might then worry that on a Wednesday healthcare workers will tend to be more overworked, and so slightly more likely to make mistakes, than on a Tuesday (because many of them will have had the weekend off and so on Wednesday theyâve been through a larger period of workdays without significant time off). On the other hand, we might think that people are reluctant to go to a hospital on a weekend such that thereâll be a ârushâ on hospitals on Mondays, which takes until Wednesday to âclearââmaking in fact Monday or Tuesday more stressful for healthcare workers. And so on and so on âŚ
(This is all made up, but if I google for relevant terms I pretty quickly find studies such as Weekday of Surgery Affects Postoperative Complications and Long-Term Survival of Chinese Gastric Cancer Patients after Curative Gastrectomy or Outcomes are Worse in US Patients Undergoing Surgery on Weekends Compared With Weekdays or Influence of weekday of surgery on operative
complications. An analysis of 25.000 surgical procedures or âŚ
Iâm sure many of these studies are terrible but their existence illustrates that it might be pretty hard to justify an epistemic state that is committed to the effect of different weekdays exactly canceling out.)
((It doesnât help if we could work out the net effect on all health outcomes at birth, say b/âc we can look at empirical data from hospitals. Presumably some non-zero net effect on e.g. whether or not we increase the total human population by 1 at an earlier time would remain, and then weâre caught in the âstandardâ complex cluelessness problem of working out whether the long-term effects of this are net positive or net negative etc.))
Iâm wondering if a better definition of simple cluelessness would be something like: âWhile the effects donât âcancel outâ, we are justified in believing that their net effect will be small compared to differences in short-term effects.â
I think that thatâs clearly a good sort of sentence to say. But:
I donât think we need the simple vs complex cluelessnessâ idea to say that
I really donât want us to use the term âcluelessâ for that! That sounds very absolute, and I think was indeed intended by Greaves to be absolute (see her saying âutterly unpredictableâ here).
I donât want us to have two terms that (a) sound like theyâre meant to sharply distinct, and (b) were (if I recall correctly) indeed originally presented as sharply distinct.
(I outlined my views on this a bit more in this thread, which actually happens to have been replies to you as well.)
Why canât we simply talk in terms of having more or less âresilientâ or âjustifiedâ credences, in terms of of how large the value of information from further information-gathering or information-analysis would be, and in terms of the value of what we couldâve done with that time or those resources otherwise?
It seems like an approach thatâs more clearly about quantitative differences in degree, rather than qualitative differences in kind, would be less misleading and more useful.
Itâs been a year since I thought about this much, and I only read 2 of the papers and a bunch of the posts/âcomments (so I didnât e.g. read Trammellâs paper as well). But from memory, I think there are at least two important ways in which standard terms and framing of simple vs complex cluelessness has caused issues:
Many people seem to have taken the cluelessness stuff as an argument that we simply canât say anything at all about the long-term future, whereas we can say something about the near-term future, so we should focus on the near-term future.
Greaves seems to instead want to argue that we basically, at least currently, canât say anything at all about the long-term effects of interventions like AMF, whereas we can say something about the long-term effects of a small set of interventions chosen for their long-term effects (e.g., some x-risk reduction efforts), so we should focus on the long-term future.
See e.g. here, where Greaves says the long-term effects of short-termist interventions are âutterly unpredictableâ.
My independent impression is that both of these views are really problematic, and that the alternative approach used in Tarsneyâs epistemic challenge paper is just obviously far better. We should just think about how predictable various effects on various timelines from various interventions are. We canât just immediately say that we should definitely focus on neartermist interventions or that we should definitely focus on longtermist interventions; it really depends on specific questions that we actually can improve our knowledge about (through efforts like building better models or collecting more evidence about the feasibility of long-range forecasting).
Currently, this is probably the main topic in EA where it feels to me like thereâs something important thatâs just really obviously true and that lots of other really smart people are missing. So I should probably find time to collect my thoughts from various comments into a single post that lays out the arguments better.
When this post went up, I wrote virtually the same comment, but never sent it! Glad to see you write it up, as well as your below comments. I have the impression that in each supposed example of âsimple cluelessnessâ people just arenât being creative enough to see the âcomplex cluelessnessâ factors, as you clarify with chairs in other comment.
My original comment even included saying how Philâs example of simple cluelessness is false, but itâs false for different reasons than you think: If you try to conceive a child a day later, this will not in expectancy impact when the child will be born. The impact is actually much stronger than that. It will affect whether you are able to conceive in this cycle at all, since eggs can only be fertilized during a very brief window of time (12-24 hours). If you are too late, no baby.
To be honest Iâm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was to realise there seems to be an issue of complex cluelessness in the first placeâwhere we canât really form precise credences in certain instances where people have traditionally felt like they can, and that these instances are often faced by EAs when theyâre trying to do the most good.
Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, but we don't really have our "EA hat" on when doing these things. In other words, I'm not having a child to do the most good; I'm doing it because I want to. So I guess in these circumstances I don't really care about my complex cluelessness. When giving to charity, I very much do care about any complex cluelessness, because I'm trying to do the most good and really thinking hard about how to do so.
I'm still not sure if I would class myself as complexly clueless when deciding which chair to sit on (I think from a subjective standpoint I at least feel simply clueless), but I'm also not sure this particular debate really matters.
I'm also inclined to agree with this. I actually only very recently realized that a similar point had also been made in the literature: in this 2019 "discussion note" by Lok Lam Yim, which is a reply to Greaves's cluelessness paper:
Interesting. That's fairly similar to the counterarguments I gave for the same case here:
Your critique of the conception example might be fair, actually. I do think it's possible to think up circumstances of genuine "simple cluelessness" though, where, from a subjective standpoint, we really don't have any reason to think one option may be better or worse than the alternative.
For example, we can imagine there being two chairs in front of us and making a choice of which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is certainly possible that choosing the left chair over the right chair could be a terrible decision ex post. So I do think this decision is qualitatively different to donating to AMF.
However, I think the reason why Greaves introduces the distinction between complex and simple cluelessness is to save consequentialism from Lenman's cluelessness critique (going by hazy memory here). If a much wider class of decisions suffers from complex cluelessness than Greaves originally thought, this could prove problematic for her defence. Having said that, I do still think that something like working on AI alignment probably avoids complex cluelessness, for the reasons I give in the post, so I think Greaves' work has been useful.
So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).
Some off-the-top-of-my-head reasons we might not have perfect evidential symmetry here:
One chair might be closer, so walking to it expends less energy and/or takes less time, which has various knock-on effects
One chair will be closer to some other object in the world, making it easier for you to hear what's going on over there and for people over there to hear you, which could have various knock-on effects
One chair might look very slightly older, and thus very slightly more likely to have splinters or whatever
I totally agree, but this is a very different claim from there being a qualitative, absolute distinction between simple and complex cluelessness.
My independent impression is that, for the purpose of evaluating longtermism and things like that, we could basically replace all discussion of simple vs complex cluelessness with the following points:
You'll typically do a better job achieving an objective (in expectation) if you choose a plan that was highlighted in an effort to try to achieve that objective, rather than choosing a plan that was highlighted in an effort to try to achieve some other objective
This seems like common sense, and is also in line with the "suspicious convergence" idea
Plans like "donate to AMF" were not highlighted to improve the very long-term future
Plans like "donate to reduce AI x-risk" were highlighted largely to improve the very long-term future
A nontrivial fraction of people highlighted this plan for other reasons (e.g., because they wanted to avoid extinction for their own sake or the sake of near-term generations), but a large fraction highlighted it for approximately longtermist reasons (e.g., Bostrom)
On the object-level, it also seems like existing work makes a much more reasonable case for reducing AI x-risk as a way to improve the long-term future than for AMF as a way to improve the long-term future
But then there's also the fact that those far-future effects are harder to predict than nearer-future effects, and nearer-future effects do matter at least somewhat, so it's not immediately obvious whether we should focus on the long term or the short term. This is where work like "The Epistemic Challenge to Longtermism" and "Formalising the 'Washing Out Hypothesis'" becomes very useful.
Also, there are many situations where it's not worth trying to work out which of two actions is better, due to some mixture of that being very hard to work out and the stakes not being huge
E.g., choosing which chair to sit on; deciding which day to try to conceive a child on
This is basically just a point about value of information and opportunity cost; it doesn't require a notion of absolute evidential symmetry
(I used AMF and AI x-risk as the examples because you did; we could also state the points in a more general form.)
FWIW, I also think other work of Greaves's has been very useful. And I think most people who've thought about the topic (though not everyone) consider the cluelessness stuff much more useful than I do (I'm just reporting my independent impression here), so my all-things-considered belief is that that work has probably been more useful than it seems to me.
I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected-value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they are both in the front row on either side of the aisle, etc.).
I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other; otherwise they fall prey to paralysis. Admittedly I haven't read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.
EDIT: To be honest I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing from Greaves was realising there seems to be an issue of complex cluelessness in the first place, where we can't really form precise credences.
For me, Greaves' work on cluelessness just highlighted a problem I didn't think was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example giving to AMF), and I think this should hold even for short-termists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to this problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).
I haven't read the relevant papers since last year, but I think I recall the idea being not just that we currently don't have a sense of what the long-term effects of an action are, but also that we basically can't gain information about them. In line with that memory, Greaves writes here that the long-term effects of short-termist interventions are "utterly unpredictable", a much stronger claim than just that we currently have no real prediction.
(And I think that that idea is very problematic, as discussed elsewhere in this thread.)
(I'm writing this while a little jetlagged, so it might be a bit incoherent or disconnected from what you were saying.)
I don't think this is right. I think the key thing is to remember that doing more analysis (thinking, discussing, researching, whatever) is itself a choice, and itself has a certain expected value (which is related to how long it will take, how likely it is to change what other decision you make, and how much of an improvement that change might be). Sometimes that expected value justifies the opportunity cost, and sometimes it doesn't. This can be true whether or not you can immediately see any difference in the expected value of the two "concrete choices" (a term I'm making up to exclude the choice to do further analysis).
E.g., I don't spend time deciding which of two similar chairs to sit in, and this is the right decision for me to make from a roughly utilitarian perspective, because:
It seems that, even after quite a while spent analysing which chair I should sit in, the expected value I assign to each choice would be quite similar
There are other useful things I can do with my time
The expected value of just choosing a chair right away and then doing certain other things is higher than the expected value of first spending longer deciding which chair to sit in
(Of course, I don't explicitly go through that whole thought process each time I implicitly make a mundane decision.)
But there are also some cases where the expected values weâd guess each of two actions would have are basically the same and yet we should engage in further analysis. This is true when the opportunity cost of the time spent on that analysis seems justified, in expectation, by the probability that that analysis would cause us to change our decision and the extent to which that change might be an improvement.
So I don't think the concept of "simple cluelessness" is necessary, and I think it's unhelpful in that:
It sounds absolute and unchangeable, whereas in many cases one either already has or could come to have a belief about which action would have higher expected value
It implies that thereâs something special about certain cases where one has extremely little knowledge, whereas really whatâs key is how much information value various actions (e.g., further thinking) would provide and what opportunity cost those actions have