I'm wondering if a better definition of simple cluelessness would be something like: "While the effects don't 'cancel out', we are justified in believing that their net effect will be small compared to differences in short-term effects."
I think that that's clearly a good sort of sentence to say. But:
I don't think we need the 'simple vs complex cluelessness' idea to say that.
I really don't want us to use the term 'clueless' for that! That sounds very absolute, and I think it was indeed intended by Greaves to be absolute (see her saying 'utterly unpredictable' here).
I don't want us to have two terms that (a) sound like they're meant to be sharply distinct, and (b) were (if I recall correctly) indeed originally presented as sharply distinct.
(I outlined my views on this a bit more in this thread, which actually happens to have been in replies to you as well.)
Why can't we simply talk in terms of having more or less 'resilient' or 'justified' credences, in terms of how large the value of information from further information-gathering or information-analysis would be, and in terms of the value of what we could've done with that time or those resources otherwise?
It seems like an approach that's more clearly about quantitative differences in degree, rather than qualitative differences in kind, would be less misleading and more useful.
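To make the value-of-information framing concrete, here's a toy calculation (the numbers are made up purely for illustration, not taken from any of the papers): say intervention A is worth 10 units for sure, while B is worth 20 with probability 0.4 and 0 otherwise, so E[V_B] = 8 and on our current credences we'd pick A. If further analysis would reveal B's true value before we had to choose, the expected value of that (perfect) information is E[max(V_A, V_B)] − max(E[V_A], E[V_B]) = (0.4 · 20 + 0.6 · 10) − 10 = 4. We can then weigh that 4 against the cost of the analysis and against what we could've done with that time otherwise, which is a question of degree rather than a binary verdict of 'clueless' vs 'not clueless'.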
It's been a year since I thought about this much, and I only read two of the papers and a bunch of the posts/comments (so I didn't e.g. read Trammell's paper as well). But from memory, I think there are at least two important ways in which the standard terms and framing of simple vs complex cluelessness have caused issues:
Many people seem to have taken the cluelessness stuff as an argument that we simply can't say anything at all about the long-term future, whereas we can say something about the near-term future, so we should focus on the near-term future.
Greaves seems to instead want to argue that we basically, at least currently, can't say anything at all about the long-term effects of interventions like AMF, whereas we can say something about the long-term effects of a small set of interventions chosen for their long-term effects (e.g., some x-risk reduction efforts), so we should focus on the long-term future.
See e.g. here, where Greaves says the long-term effects of short-termist interventions are 'utterly unpredictable'.
My independent impression is that both of these views are really problematic, and that the alternative approach used in Tarsney's epistemic challenge paper is just obviously far better. We should just think about how predictable various effects on various timelines from various interventions are. We can't just immediately say that we should definitely focus on neartermist interventions or that we should definitely focus on longtermist interventions; it really depends on specific questions that we actually can improve our knowledge about (through efforts like building better models or collecting more evidence about the feasibility of long-range forecasting).
Currently, this is probably the main topic in EA where it feels to me like there's something important that's just really obviously true and that lots of other really smart people are missing. So I should probably find time to collect my thoughts from various comments into a single post that lays out the arguments better.