Hey Simon, I remain slightly confused about this element of the conversation. I take you to mean that, since we base our assessment mostly on HLI’s work, and since we draw different conclusions from HLI’s work than you think are reasonable, we should reassess StrongMinds on that basis. Is that right?
If so, I do look forward to your thoughts on the HLI analysis, but in the meantime I’d be curious to get a sense of your personal levels of confidence here — what does a distribution of your beliefs over cost-effectiveness for StrongMinds look like?
since we base our assessment mostly on HLI’s work, and since we draw different conclusions from HLI’s work than you think are reasonable, we should reassess StrongMinds on that basis. Is that right?
I’m not sure exactly what you’ve done, so it’s hard for me to comment precisely. I’m just struggling to see how you can be confident in a “6x as effective as GD” conclusion.
what does a distribution of your beliefs over cost-effectiveness for StrongMinds look like?
So there are two sides to this:
The first is my confidence in HLI’s philosophical views. I have both spoken to Joel and read all their materials several times, and I think I understand their views. I am sure I do not fully agree with them, and I’m not sure how much I believe them. I’d put myself at roughly 30% that I agree with their general philosophy. This is important because how cost-effective you believe StrongMinds is turns out to be quite sensitive to philosophical assumptions. (I plan to expand on this when discussing HLI.)
The second is my estimate under HLI’s philosophical assumptions, where roughly speaking I’m at:
10% SM is 4-8x as good as GiveDirectly
25% SM is 1-4x as good as GiveDirectly
35% SM is 0.5-1x as good as GiveDirectly
30% SM not effective at all
So, under HLI’s assumptions, I think StrongMinds is roughly as good as GiveDirectly.
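As a sketch (my own illustration, not the author's calculation), the expected cost-effectiveness multiple implied by the stated distribution can be computed by taking the midpoint of each range as a point estimate — the midpoint choice is an assumption, and different choices (e.g. geometric midpoints) give somewhat lower numbers:

```python
# Expected cost-effectiveness multiple vs. GiveDirectly implied by the
# stated distribution, using the arithmetic midpoint of each range.
buckets = [
    (0.10, (4 + 8) / 2),    # 10%: 4-8x  -> midpoint 6x
    (0.25, (1 + 4) / 2),    # 25%: 1-4x  -> midpoint 2.5x
    (0.35, (0.5 + 1) / 2),  # 35%: 0.5-1x -> midpoint 0.75x
    (0.30, 0.0),            # 30%: not effective at all
]
expected = sum(p * x for p, x in buckets)
print(f"{expected:.2f}x GiveDirectly")  # about 1.49x with these midpoints
```

With these (assumed) midpoints the mean works out a little above 1x, driven mostly by the small 4-8x tail; the median of the distribution sits in the 0.5-1x bucket, which is consistent with summarizing it as "roughly as good as GiveDirectly".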
I think you will probably say that on this basis you’d still recommend StrongMinds, given your risk-neutral principle, but I think this underestimates quite how uncertain I would expect people to be under the HLI worldview. (I also disagree with risk neutrality, but I suspect that’s a discussion for another day!)