I don’t want to start a pointless industry of alternately ‘shooting down’ & refining purported cases of simple cluelessness, but just for fun here is another reason why our cluelessness regarding “conceiving a child on Tuesday vs. Wednesday” really is complex:
Shifting the time of conception by one day (ignoring the empirical complication pointed out by Denise below) also shifts the probability distribution of birth date by weekday, e.g. whether the baby’s birth occurs on a Tuesday or Wednesday. However, for all we know the weekday of birth has a systematic effect on birth-related health outcomes of mother or child. For instance, consider some medical complication occurring during labor with weekday-independent probability, which needs to be treated in a hospital. We might then worry that on a Wednesday healthcare workers will tend to be more overworked, and so slightly more likely to make mistakes, than on a Tuesday (because many of them will have had the weekend off, so by Wednesday they’ve worked through a longer stretch of days without significant time off). On the other hand, we might think that people are reluctant to go to a hospital on a weekend, such that there’ll be a “rush” on hospitals on Mondays which takes until Wednesday to “clear”, in fact making Monday or Tuesday more stressful for healthcare workers. And so on and so on …
(This is all made up, but if I google for relevant terms I pretty quickly find studies such as “Weekday of Surgery Affects Postoperative Complications and Long-Term Survival of Chinese Gastric Cancer Patients after Curative Gastrectomy” or “Outcomes are Worse in US Patients Undergoing Surgery on Weekends Compared With Weekdays” or “Influence of weekday of surgery on operative complications. An analysis of 25.000 surgical procedures” or … I’m sure many of these studies are terrible, but their existence illustrates that it might be pretty hard to justify an epistemic state that is committed to the effects of different weekdays exactly canceling out.)
((It also wouldn’t help if we could work out the net effect on all health outcomes at birth, say because we can look at empirical data from hospitals. Presumably some non-zero net effect on, e.g., whether or not we increase the total human population by 1 at an earlier time would remain, and then we’re caught in the ‘standard’ complex-cluelessness problem of working out whether the long-term effects of this are net positive or net negative, etc.))
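To see concretely why exact cancellation would be a very special coincidence, here is a toy expected-value sketch. Every number, and the stylized gestation assumption, is invented purely for illustration, in the spirit of the made-up example above:

```python
# Toy model: how a one-day shift in conception weekday could propagate to a
# (hypothetical) difference in expected birth complications.
# All rates and the gestation model are invented for illustration only.

# Hypothetical probability that a labor complication is mishandled,
# by weekday of birth (e.g. staff slightly more overworked midweek/weekend).
mishandling_rate = {
    "Mon": 0.010, "Tue": 0.009, "Wed": 0.011,
    "Thu": 0.010, "Fri": 0.010, "Sat": 0.012, "Sun": 0.012,
}

days = list(mishandling_rate)  # weekday names in order

def birth_weekday_dist(conception_day: str) -> dict:
    """Stylized assumption: birth occurs ~280 days after conception, uniformly
    within a 5-day window, so shifting conception by one day shifts the whole
    birth-weekday distribution by one day."""
    start = days.index(conception_day)
    window = [(start + 280 + k) % 7 for k in (-2, -1, 0, 1, 2)]
    dist = {d: 0.0 for d in days}
    for i in window:
        dist[days[i]] += 1 / len(window)
    return dist

def expected_mishandling(conception_day: str) -> float:
    """Expected mishandling probability, averaged over birth weekdays."""
    dist = birth_weekday_dist(conception_day)
    return sum(p * mishandling_rate[d] for d, p in dist.items())

for day in ("Tue", "Wed"):
    # The two expectations generically differ: no exact cancellation.
    print(day, round(expected_mishandling(day), 5))
```

The point is not the particular numbers: as long as the weekday-specific rates are not all exactly equal, shifting the birth-weekday distribution by one day generically changes the expected outcome, so an epistemic state committed to exact cancellation would need a special justification.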
I’m wondering if a better definition of simple cluelessness would be something like: “While the effects don’t ‘cancel out’, we are justified in believing that their net effect will be small compared to differences in short-term effects.”
I think that’s clearly a good sort of sentence to say. But:
I don’t think we need the “simple vs. complex cluelessness” idea to say that.
I really don’t want us to use the term “clueless” for that! That sounds very absolute, and I think it was indeed intended by Greaves to be absolute (see her saying “utterly unpredictable” here).
I don’t want us to have two terms that (a) sound like they’re meant to be sharply distinct, and (b) were (if I recall correctly) indeed originally presented as sharply distinct.
(I outlined my views on this a bit more in this thread, which actually happens to have been replies to you as well.)
Why can’t we simply talk in terms of having more or less “resilient” or “justified” credences, in terms of how large the value of information from further information-gathering or information-analysis would be, and in terms of the value of what we could’ve done with that time or those resources otherwise?
It seems like an approach that’s more clearly about quantitative differences in degree, rather than qualitative differences in kind, would be less misleading and more useful.
It’s been a year since I thought about this much, and I only read two of the papers and a bunch of the posts/comments (so I didn’t, e.g., read Trammell’s paper as well). But from memory, I think there are at least two important ways in which the standard terms and framing of simple vs. complex cluelessness have caused issues:
Many people seem to have taken the cluelessness stuff as an argument that we simply can’t say anything at all about the long-term future, whereas we can say something about the near-term future, so we should focus on the near-term future.
Greaves seems to instead want to argue that we basically, at least currently, can’t say anything at all about the long-term effects of interventions like AMF, whereas we can say something about the long-term effects of a small set of interventions chosen for their long-term effects (e.g., some x-risk reduction efforts), so we should focus on the long-term future.
See e.g. here, where Greaves says the long-term effects of short-termist interventions are “utterly unpredictable”.
My independent impression is that both of these views are really problematic, and that the alternative approach used in Tarsney’s epistemic challenge paper is just obviously far better. We should just think about how predictable various effects on various timelines from various interventions are. We can’t just immediately say that we should definitely focus on neartermist interventions or that we should definitely focus on longtermist interventions; it really depends on specific questions that we actually can improve our knowledge about (through efforts like building better models or collecting more evidence about the feasibility of long-range forecasting).
Currently, this is probably the main topic in EA where it feels to me like there’s something important that’s just really obviously true and that lots of other really smart people are missing. So I should probably find time to collect my thoughts from various comments into a single post that lays out the arguments better.