GiveWell’s estimates use real, tangible, collected data.
I wonder if you have come across the literature on complex cluelessness? GiveWell may use some real, tangible data, but they are missing a lot of highly relevant and important data, most obviously relating to the longer-term consequences of the health interventions. For example, they don't know what the long-term population effects will be, nor the corresponding moral value of those population effects. It also doesn't seem fair to me to simply assume these are zero in expectation, which GiveWell implicitly does. In fact, it seems highly plausible that these longer-term effects could swamp the near-term ones.
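To put the worry slightly more formally (the decomposition below is just my own sketch of the point, not anything from GiveWell's models):

$$
\mathbb{E}[V(A)] = \underbrace{\mathbb{E}[V_{\text{near}}(A)]}_{\text{estimated from data}} + \underbrace{\mathbb{E}[V_{\text{far}}(A)]}_{\text{not modelled}}
$$

The data pins down the first term only. Setting the second term to zero is a substantive assumption, and if its magnitude can plausibly exceed the first term's, the sign of the whole sum is up for grabs.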
I personally still have to think through cluelessness more to decide what conclusions to draw from it (as does the rest of the EA movement, as I don't think everyone has caught on to just how important this problem is!). As it stands, however, it has caused me to move away somewhat from cost-benefit analyses that make use of 'real, tangible data' and towards arguments that are supposedly 'more robust' to a range of different assumptions and inputs, which, funnily enough, I think may point towards certain longtermist interventions.
I appreciate that this is a starkly different view from yours, and I would be happy to hear your thoughts here!
I have read about (complex) cluelessness. I have a lot of respect for Hilary Greaves, but I don't think cluelessness is a particularly illuminating concept. I view it as a variant of "we can't predict the future." So, naturally, if you ground your ethics in expected-value calculations over the long-term future, then, well, there are going to be problems.
I would propose to resolve cluelessness as follows: Let's admit we can't predict the future. Our focus should instead be on error-correction. Our actions will have consequences, both intended and unintended, good and bad. The best we can do is foster a critical, rational environment where we can discuss the negative consequences, solve them, and repeat. (I know this answer will sound glib, but I'm quite sincere.)
I do think it’s far more illuminating than “we can’t predict the future”.
Really, what complex cluelessness is saying is: OK, great, you've carried out a CBA/CEA, but you've omitted or ignored effects from the analysis that we:
Have good reason to expect will occur
Have good reason to suspect are sufficiently important that they could change the sign of your final number if properly included in the analysis (the toy sketch after this list shows such a sign flip)
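To make that second condition concrete, here is a toy sketch in Python (every number is invented purely for illustration; this is not GiveWell's model or anyone's real estimate):

```python
# Toy CEA: a well-measured near-term benefit plus an omitted long-term
# effect whose expected value we cannot pin down. All numbers hypothetical.

near_term = 100.0  # benefit estimated from real, tangible data

# Two (hypothetically) equally defensible guesses about the omitted term:
guesses = {
    "optimistic": +300.0,   # long-term population effects net out good
    "pessimistic": -300.0,  # long-term population effects net out bad
}

# What the analysis would conclude if the omitted effect were included:
for label, long_term in guesses.items():
    total = near_term + long_term
    print(f"{label:>11}: {near_term:+.0f} near {long_term:+.0f} far "
          f"= {total:+.0f} total")
```

The optimistic guess gives +400 and the pessimistic one gives -200: the sign of the final number flips with the guess, even though the measured near-term term never changes.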
If those two conditions do hold in the case of GiveWell (I think they probably do), then I don't think GiveWell CBAs are all that useful, and the original point you were trying to make (that GiveWell's analysis is obviously superior because it makes use of data) sort of breaks down because, quite simply, the data has a massive, gaping hole in it. This is not to criticise GiveWell in the slightest; it's just to acknowledge the monstrous task they're up against.
Correct me if I'm wrong, but what you seem to be arguing is that we're actually complexly clueless about everything, so we may as well just ignore the problem. I don't think this is true: we may be clueless about everything, but not necessarily in a complex way.

Consider the promotion of philosophy in schools, a class of interventions I have written about. I'm not sure these are definitely the best interventions (reception to my post was fairly lukewarm), but I also don't think we are complexly clueless about their effects in the way we are about the effects of distributing bednets. That's because it's just quite hard to think up reasons why promoting philosophy in schools might be bad. Sure, it could be the case that it makes something bad happen, but I don't have much reason to entertain that possibility if I can't think of a specific effect that fulfils the two conditions I listed above. With bednet distribution, by contrast, we are pretty certain there will be population effects, and pretty certain these will be very significant in moral terms, but we don't have much of a clue about the magnitude or even the sign of their moral value. So I would say we are complexly clueless about distributing bednets but only simply clueless about promoting philosophy in schools, and only complex cluelessness is really a problem according to Greaves.
To clarify, I actually think there will be short-termist interventions that don't run into the problem of complex cluelessness (to give just one example, saving an animal from a factory farm), so I'm not attempting to prove longtermism here; I'm only countering your claim that using data in a CBA/CEA is necessarily better than engaging in an alternative method of analysis.
One response might be that if there are unintended negative consequences, we can address those later or separately. Sometimes optimizing for some positive effect will also maximize a negative effect, but usually the two won't coincide. So the most cost-effective ways to save lives won't be the ways that maximize the negative effects of population growth (those same negative effects would be cheaper to produce through something other than population growth), and we can probably find more cost-effective ways to offset those effects. I wrote a post about hedging like this.
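To make the arithmetic of that comparison concrete, here is a toy version (all figures are made up for illustration; they are not from my post or from any real charity evaluation):

```python
# Toy hedging comparison, hypothetical numbers throughout. Option A is the
# most cost-effective life-saving intervention but carries a negative
# side-effect; we price in offsetting that side-effect separately.

COST_PER_LIFE_A = 5_000      # $ per life saved, best intervention
SIDE_EFFECT_PER_LIFE = 2.0   # units of the bad effect per life saved
COST_PER_OFFSET_UNIT = 300   # $ to neutralize one unit of the bad effect

COST_PER_LIFE_B = 9_000      # runner-up intervention with no side-effect

cost_a_hedged = COST_PER_LIFE_A + SIDE_EFFECT_PER_LIFE * COST_PER_OFFSET_UNIT
print(f"A plus offset: ${cost_a_hedged:,.0f} per life, side-effect offset")
print(f"B alone:       ${COST_PER_LIFE_B:,.0f} per life")
```

Because A wasn't chosen to maximize the side-effect, offsetting it is comparatively cheap, so A-plus-offset ($5,600) can still beat the 'clean' option B ($9,000).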
Interesting, thanks for sharing that post. I will have to read it more carefully to fully digest it!