I have read about (complex) cluelessness. I have a lot of respect for Hilary Greaves, but I don’t think cluelessness is a particularly illuminating concept. I view it as a variant of “we can’t predict the future.” So, naturally, if you ground your ethics in expected value calculations over the long-term future, then, well, there are going to be problems.
I would propose to resolve cluelessness as follows: let’s admit we can’t predict the future. Our focus should instead be on error-correction. Our actions will have consequences, both intended and unintended, good and bad. The best we can do is foster a critical, rational environment where we can discuss the negative consequences, solve them, and repeat. (I know this answer will sound glib, but I’m quite sincere.)
I do think it’s far more illuminating than “we can’t predict the future”.
Really, complex cluelessness is saying: fine, you’ve carried out a CBA/CEA, but you’ve omitted or ignored effects from the analysis that we:
Have good reason to expect will occur
Have good reason to suspect are important enough that, if properly included in the analysis, they could change the sign of your final number
If the above factors do hold in the case of GiveWell (I think they probably do), then I don’t think GiveWell CBAs are all that useful, and the original point you were trying to make, that GiveWell analysis is obviously superior because it makes use of data, rather breaks down, because, quite simply, the data has a massive, gaping hole in it. This is not to criticise GiveWell in the slightest; it’s just to acknowledge the monstrous task they’re up against.
Correct me if I’m wrong, but you seem to be arguing that we’re actually complexly clueless about everything, so we may as well just ignore the problem. I don’t think this is true: we may be clueless about everything, but not necessarily in a complex way. Consider the promotion of philosophy in schools, a class of interventions that I have written about. I’m not sure these are definitely the best interventions (reception to my post was fairly lukewarm), but I also don’t think we are complexly clueless about their effects in the way that we are about the effects of distributing bednets. This is because it’s just quite hard to think up reasons why promoting philosophy in schools might be bad. Sure, it could be that promoting philosophy in schools makes something bad happen, but I don’t have much reason to entertain that possibility if I can’t think of a specific effect that fulfils the two factors I listed above. In the case of distributing bednets, we are pretty certain there will be population effects, and we are pretty certain these will be very significant in moral terms, but we don’t have much of a clue about the magnitude or even the sign of their moral value. Therefore I would say we are complexly clueless about distributing bednets but only simply clueless about promoting philosophy in schools, and only complex cluelessness is really a problem according to Greaves.
To clarify, I actually think there will be short-termist interventions that don’t run into the problem of complex cluelessness (to give just one example: saving an animal from a factory farm), so I’m not attempting to prove longtermism here; I’m only countering your claim that using data in a CBA/CEA is necessarily better than engaging in an alternative method of analysis.