Ah, I was waiting for someone to bring these up!

On cluster vs. sequence thinking, I don’t really understand what the important distinction is supposed to be. Sometimes you need to put various assumptions together to reach a conclusion; cost-effectiveness analysis is a salient example. But for each specific premise, you can also think about the different pieces of information that would change your view on it. Aren’t those just the sequence and cluster bits, respectively? Okay, so you need to do both. Hence, if someone were to say ‘that’s wrong, you’re using sequence thinking’, I think the correct response is to look at them blankly and say ‘um, okay… so what exactly are you saying the problem is?’
On cost-effectiveness, I’m going to assume that this is what GiveWell (and others) should be optimising for. And if they aren’t optimising for cost-effectiveness then, well, what are they trying to do? I can’t see any statement of what they are aiming for instead.
Also, I don’t understand why trying to maximise cost-effectiveness would fail to maximise it. Of course, you shouldn’t do naive cost-effectiveness analysis, just as you probably shouldn’t be naive in general.
I appreciate that putting numbers on things can sometimes feel like false precision. But that’s a reason to use confidence intervals. (Also, as the saying goes, “if it’s worth doing, it’s worth doing with made up numbers”.) Clearly, GiveWell do need to do cost-effectiveness assessments, even if only informally and in their heads, to decide what their recommendations are. But just as crucial as sharing the numbers is explaining the reasons and evidence behind your decision, so people can check them and see whether they agree. The point of this post is to highlight an important part of the analysis that was missing.
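As a toy illustration of the “made-up numbers plus confidence intervals” approach (all input ranges below are invented for the example, not taken from any real GiveWell model): sample each uncertain input from its assumed range and report a median and a 90% interval for cost-effectiveness, rather than a single point figure.

```python
import random

def simulate_cost_effectiveness(n=100_000, seed=0):
    """Monte Carlo sketch: propagate made-up input ranges into a
    distribution of cost per unit of effect, not just one number."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        cost_per_person = rng.uniform(4.0, 6.0)       # assumed $ range
        effect_per_person = rng.uniform(0.005, 0.02)  # assumed effect range
        samples.append(cost_per_person / effect_per_person)
    samples.sort()
    point = samples[n // 2]                        # median estimate
    lo, hi = samples[int(0.05 * n)], samples[int(0.95 * n)]  # 90% interval
    return point, (lo, hi)
```

Even with entirely made-up inputs, the interval communicates how much the conclusion could move, which is exactly the information a bare point estimate hides.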
Thanks for collecting those quotes here. Because of some of what you quoted, I was confused for a while as to how much weight they actually put on their cost-effectiveness estimates. Elie’s appearance on Spencer Greenberg’s Clearer Thinking Podcast should be the most recent view on the issue.
In my experience, GiveWell is one of the few institutions that’s trying to make decisions based on cost-effectiveness analyses and trying to do that in a consistent and principled way. GiveWell’s cost-effectiveness estimates are not the only input into our decisions to fund programs, there are some other factors, but they’re certainly 80% plus of the case. I think we’re relatively unique in that way.

(Time at the start of the quote: 29:14.)
I think the quote is reasonably clear in its argument: maximizing cost-effectiveness through explicit EV calculation is not robust to uncertainty in our estimates. More formally, if our distribution of estimates is misspecified, then incorporating strength of evidence as a factor beyond the explicit EV calculation limits how much weight we place on any (potentially misspecified) estimate. This is Knightian uncertainty, and optimal decisions under Knightian uncertainty place more weight on factors with less risk of misspecification (i.e. stronger evidence).
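One simple sketch of such a decision rule (the option names and numbers below are made up): instead of maximizing EV under a single model, maximize the worst-case EV over a set of plausible models. Options backed by weaker evidence admit a wider set of plausible models, so they get penalized automatically.

```python
def robust_choice(options):
    """Max-min rule: pick the option whose *worst-case* EV, across a
    set of plausible models of the world, is highest."""
    # options: name -> list of EVs, one per plausible model
    return max(options, key=lambda name: min(options[name]))

# A well-evidenced program with a tight range of plausible EVs can beat
# a speculative one whose EV collapses under some plausible models.
options = {
    "strong_evidence": [9, 10, 11],    # narrow model set
    "weak_evidence": [0.1, 10, 40],    # wide model set: could be near-zero
}
robust_choice(options)  # -> "strong_evidence" (worst case 9 beats 0.1)
```

Note that plain EV maximization over the model average would favour "weak_evidence" here (mean ≈ 16.7 vs 10); the max-min rule is one formal way that “strength of evidence” can enter the decision criterion rather than staying a vague consideration.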
You say that a “cluster bit”, where you think about where the evidence is coming from, can account for this. I don’t think that’s true. Ultimately, the width of your uncertainty is irrelevant to the final Fermi estimate: saying that you can “think about” sources of uncertainty doesn’t matter if that thinking never cashes out into a decision criterion!
For example, if you estimate an important quantity as q = 1 with a confidence band of (−99, 101), you will get the same cost-effectiveness estimate as if q had a confidence band of (0, 2). Even though the latter case is much more robust, nothing in the calculation down-weights the former. You do have the ability to place confidence bands around your cost-effectiveness estimate, but in every instance I’ve seen, the confidence bands are pure lip service and the point estimate is the sole decision criterion. I don’t see a confidence band in your estimate (sorry if I missed it), so that doesn’t seem like the most robust defense?
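To make the q = 1 example concrete, here is a minimal sketch (my own illustration, not anyone’s actual method) of a precision-weighted Bayesian adjustment, one standard way to let the width of the band, not just the point estimate, drive the final number. It assumes a standard-normal prior centred at 0 and treats the band (−99, 101) as roughly q = 1 ± 2 standard deviations, i.e. sd ≈ 50; both of those are assumptions for the example.

```python
def shrunk_estimate(est, sd, prior_mean=0.0, prior_sd=1.0):
    """Normal-normal posterior mean: a precision-weighted average of the
    estimate and the prior, so wider bands pull the answer toward the prior."""
    w = (1 / sd**2) / (1 / sd**2 + 1 / prior_sd**2)  # weight on the estimate
    return w * est + (1 - w) * prior_mean

# Both estimates have point value q = 1, but very different band widths:
shrunk_estimate(1.0, sd=50.0)  # wide band, roughly (-99, 101): shrinks toward 0
shrunk_estimate(1.0, sd=0.5)   # tight band, roughly (0, 2): stays near 1
```

Under this rule the noisy estimate contributes almost nothing, while the tight one survives mostly intact, which is precisely the behaviour a bare point estimate cannot deliver.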