Nice post Joey, thanks for laying it out so clearly.
I agree with almost all of this. I find it interesting to think more about which domains / dimensions I'd prefer to push towards prioritization vs. pluralism:
Speaking loosely, I think EA could push more towards pluralism for career decisions (where personal fit, talent absorbency and specialization are important) and for FAW/GHD/GCR cause prioritization (where I at least feel swamped by uncertainty). But I'm pretty unsure where to shift on the margin in other domains like GiveWell-style GHD direct delivery (where money is ~fungible and comparisons can be meaningful).
e.g. I suspect I'm more willing to prioritize than you on bednets vs. therapy. I think you / AIM are more positive than me about therapy (as currently delivered) on the merits. Sure, there's a lot of uncertainty, but having spent a bit of time with the CEAs, I just find it really hard to get to therapy being more cost-effective than bednets in high-burden areas.
> you could pretty easily imagine a GW-like charity evaluator that ranks income as 4x as important as GW does coming to pretty different but still highly compelling top charities
I agree moral weights are one of the more uncertain parameters, though I think the range of reasonable disagreement given current evidence is a bit less wide than implied here. I'd love to see someone dive deep on the question and actually make the case that we should be using moral weights for income 4x higher relative to health, rather than just that they're plausible.
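To make the sensitivity concrete, here's a toy sketch with entirely made-up cost-effectiveness numbers (none of these are GiveWell's actual figures or charity names) showing how a 4x moral weight on income can flip a ranking between a health-focused and an income-focused intervention:

```python
# Toy illustration only: hypothetical "units per $1,000 donated".
# Health units are the numeraire (weight fixed at 1); we vary the
# moral weight placed on income units.
CHARITIES = {
    # name: (income units, health units) per $1,000 -- made-up numbers
    "bednets": (1.0, 30.0),
    "cash_transfers": (10.0, 0.5),
}

def cost_effectiveness(income_units, health_units, income_weight):
    """Total value per $1,000 under a given moral weight on income."""
    return income_weight * income_units + health_units

for income_weight in (1.0, 4.0):
    ranked = sorted(
        CHARITIES,
        key=lambda c: cost_effectiveness(*CHARITIES[c], income_weight),
        reverse=True,
    )
    print(f"income weight {income_weight}x -> ranking: {ranked}")
```

With these (hypothetical) inputs, a 1x weight ranks bednets first (31.0 vs. 10.5) while a 4x weight ranks cash transfers first (40.5 vs. 34.0), which is why the case for the higher weight needs to be made rather than just noted as plausible.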
I guess a general theme is that I worry about a tendency to string together lots of "plausible" assumptions without defending them as one's best guess, and that this erodes a prioritization mindset. I think you'd probably agree with that in general, but I suspect we have different practical views on some specifics.