A potential objection here is that the Austrian physician could in no way have foreseen that the infant they were called to tend to would later become a terrible dictator, so the physician should have done what seemed best given the information they could uncover. But this objection only highlights the difficulty presented by cluelessness. In a very literal sense, a physician in this position is clueless about what action would be best. Assessing only proximate consequences would provide some guidance about what action to take, but this guidance would not necessarily point to the action with the best consequences in the long run.
I think this example undermines, rather than supports, your point. Of course it’s possible the baby would have grown up to be Hitler. It’s also possible the baby would have grown up to be a great scientist. Hence, from the perspective of the doctor, who is presumably working on expected value and has no reason to think one speculative outcome is more likely than the other, these presumably just do cancel out. Hence the doctor looks to the obvious, proximate consequences. This seems like a case of what Greaves calls simple cluelessness.
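To make the cancellation concrete, here is a minimal sketch of the expected-value reasoning being attributed to the doctor. All probabilities and values are invented purely for illustration:

```python
# Illustrative expected-value comparison for the physician's choice.
# Every number here is hypothetical; the point is only the symmetry.

# Speculative long-run outcomes of saving the infant: by symmetry, the
# "future dictator" and "future great scientist" stories get the same
# tiny probability and equal-and-opposite value.
speculative = [
    (1e-6, -1_000_000),  # infant becomes a terrible dictator
    (1e-6, +1_000_000),  # infant becomes a great scientist
]

# Proximate consequence the doctor can actually assess: the infant lives.
proximate_value = 100

ev_save = proximate_value + sum(p * v for p, v in speculative)
ev_ignore = 0  # baseline: do nothing

print(ev_save)  # the speculative terms cancel, leaving the proximate value
print(ev_save > ev_ignore)
```

The speculative terms contribute nothing to the comparison, so the doctor is left deciding on proximate consequences alone, which is the cancellation move the comment describes.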
A couple of general comments. There is already an academic literature on cluelessness, and it’s known to some EAs. It would therefore be helpful if you made it clear what you’re doing that’s novel. I don’t mean this in a disparaging way; I simply can’t tell whether you’re disagreeing with Greaves et al. or not. If you are, that’s potentially very interesting, and I want to know exactly what the disagreement is so I can assess it and see whether I want to take your side. If you’re not presenting a new line of thought, but just summarising or restating what others have said (perhaps in an effort to bring this information to new audiences, or just for your own benefit), you should say that instead, so that people can better decide how closely to read it.
Additionally, I think it’s unhelpful to (re)invent new terminology without a good reason. I can’t tell what the clear difference between proximate, indirect, and long-run consequences is. I would much have preferred it if you’d explained cluelessness using Greaves’ set-up and then progressed from there as appropriate.
There is already an academic literature on cluelessness, and it’s known to some EAs. It would therefore be helpful if you made it clear what you’re doing that’s novel …
Do you know of worthwhile work on this beyond Greaves 2016? (Please point me to it, if you do!)
Greaves 2016 is the most useful academic work I’ve come across on this question; I was convinced by their arguments against Lenman 2000.
I stated my goal at the top of the piece.
I would much have preferred it if you’d explained cluelessness using Greaves’ set-up and then progressed from there as appropriate.
I don’t think Greaves presented an analogous terminology?
“Flow-through effects” & “knock-on effects” have been used previously, but they don’t distinguish between temporally near & temporally distant effects. That distinction seems interesting, so I decided not to use those terms.
This seems like a case of what Greaves calls simple cluelessness.
I’m fuzzy on Greaves’ distinction between simple & complex cluelessness. Greaves uses the notion of “systematic tendency” to distinguish complex cluelessness from simple, but “[t]his talk of ‘having some reasons’ and ‘systematic tendencies’ is not as precise as one would like” (p. 9 of Greaves 2016).
Perhaps it comes down to symmetry. When we notice that for every imagined consequence, there is an equal & opposite consequence that feels about as likely, we can consider our cluelessness “simple.” But when we can’t do this, our cluelessness is complex.
This criterion is unsatisfyingly subjective, though, because it relies both on our assessing the equal-and-opposite consequence as “about as likely,” and on whether we are able to imagine an equal-and-opposite consequence at all.
I take Greaves’ distinction between simple and complex cluelessness to lie in the symmetry (just as you seem to). However, I believe this symmetry consists in the fact that we are evaluating the same consequences following either from an act A or from refraining from A. For every story of long-term consequences C resulting from performing act A, there is a parallel story of those same consequences C resulting from refraining from A. Thus, we can invoke a specific Principle of Indifference, where we take the probabilities of the options to be equal, reflecting our ignorance: P(C|A) = P(C|~A), where C is a story of some long-term consequences of either performing or refraining from A.
In complex cases, this symmetry does not exist, because we’re trying to compare different consequences (C1, C2, …, Cn) resulting from the same act.
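A rough way to see the difference numerically (a sketch with made-up probabilities and values, not drawn from Greaves 2016):

```python
# A numerical sketch of the simple/complex cluelessness distinction.
# All probabilities and values below are hypothetical illustrations.

# Simple cluelessness: the same long-run story C is exactly as likely
# under act A as under refraining from it (~A), so it cancels when we
# compare the two options.
p_C_given_A = 0.3     # hypothetical
p_C_given_notA = 0.3  # Principle of Indifference: P(C|A) = P(C|~A)
value_of_C = 500
contribution = value_of_C * (p_C_given_A - p_C_given_notA)
print(contribution)  # 0.0 -- C gives no reason to prefer A over ~A

# Complex cluelessness: distinct consequence-stories C1..Cn of the same
# act, with no symmetry to exploit. Suppose we can only assign imprecise
# probability ranges; then the expected value is an interval, not a number.
stories = [
    ((0.1, 0.4), +800),  # (probability range, value) for story C1
    ((0.2, 0.5), -600),  # story C2
]
ev_low = sum((p_hi if v < 0 else p_lo) * v for (p_lo, p_hi), v in stories)
ev_high = sum((p_lo if v < 0 else p_hi) * v for (p_lo, p_hi), v in stories)
print(ev_low, ev_high)  # the interval straddles zero: we remain clueless
```

In the simple case the shared story drops out of the comparison entirely; in the complex case the imprecision never cancels, which is one way of capturing why no Principle of Indifference rescues us there.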
Thanks for the thoughtful comment :-)