I'm wondering if a better definition of simple cluelessness would be something like: "While the effects don't 'cancel out', we are justified in believing that their net effect will be small compared to differences in short-term effects."
I think that that's clearly a good sort of sentence to say. But:
I don't think we need the 'simple vs complex cluelessness' idea to say that.
I really don't want us to use the term 'clueless' for that! That sounds very absolute, and I think was indeed intended by Greaves to be absolute (see her saying "utterly unpredictable" here).
I don't want us to have two terms that (a) sound like they're meant to be sharply distinct, and (b) were (if I recall correctly) indeed originally presented as sharply distinct.
(I outlined my views on this a bit more in this thread, which actually happens to have been replies to you as well.)
Why can't we simply talk in terms of having more or less 'resilient' or 'justified' credences, in terms of how large the value of information from further information-gathering or information-analysis would be, and in terms of the value of what we could've done with that time or those resources otherwise?
It seems like an approach that's more clearly about quantitative differences in degree, rather than qualitative differences in kind, would be less misleading and more useful.
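To gesture at the sort of quantitative framing I have in mind, here's a toy sketch (the two "interventions", the credences, and all the numbers are made up by me purely for illustration; this isn't from Greaves, Tarsney, or anyone else) of asking how much further analysis would be worth, via expected value of perfect information, rather than asking whether we're "clueless":

```python
# Toy illustration (all numbers invented): instead of labelling ourselves
# "clueless", ask how much it would be worth to learn more before deciding.

p_state_a = 0.5  # credence that world-state A (rather than B) holds
values = {
    "intervention_1": {"A": 10.0, "B": 2.0},  # value of each option in each state
    "intervention_2": {"A": 4.0, "B": 6.0},
}

def expected_value(name: str) -> float:
    v = values[name]
    return p_state_a * v["A"] + (1 - p_state_a) * v["B"]

# Expected value of deciding now, with current credences:
ev_now = max(expected_value(n) for n in values)

# Expected value if we could first learn which state holds, then pick the best option:
ev_with_info = (
    p_state_a * max(v["A"] for v in values.values())
    + (1 - p_state_a) * max(v["B"] for v in values.values())
)

# Expected value of perfect information: how much the extra knowledge is worth.
evpi = ev_with_info - ev_now
print(f"EV now: {ev_now}, EV with info: {ev_with_info}, EVPI: {evpi}")
```

On that framing, the question isn't "are we clueless or not?" but "is the value of further information-gathering or analysis large enough, relative to what else we could do with that time?", which is a matter of degree.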
It's been a year since I thought about this much, and I only read 2 of the papers and a bunch of the posts/comments (so I didn't e.g. read Trammell's paper as well). But from memory, I think there are at least two important ways in which the standard terms and framing of simple vs complex cluelessness have caused issues:
Many people seem to have taken the cluelessness stuff as an argument that we simply can't say anything at all about the long-term future, whereas we can say something about the near-term future, so we should focus on the near-term future.
Greaves seems to instead want to argue that we basically, at least currently, can't say anything at all about the long-term effects of interventions like AMF, whereas we can say something about the long-term effects of a small set of interventions chosen for their long-term effects (e.g., some x-risk reduction efforts), so we should focus on the long-term future.
See e.g. here, where Greaves says the long-term effects of short-termist interventions are "utterly unpredictable".
My independent impression is that both of these views are really problematic, and that the alternative approach used in Tarsney's epistemic challenge paper is just obviously far better. We should just think about how predictable various effects on various timelines from various interventions are. We can't just immediately say that we should definitely focus on neartermist interventions or that we should definitely focus on longtermist interventions; it really depends on specific questions that we actually can improve our knowledge about (through efforts like building better models or collecting more evidence about the feasibility of long-range forecasting).
Currently, this is probably the main topic in EA where it feels to me like there's something important that's just really obviously true and that lots of other really smart people are missing. So I should probably find time to collect my thoughts from various comments into a single post that lays out the arguments better.