But I think that Linch's willingness-to-pay figure is based on accounting for future lives, rather than the kind of currently-alive-human-life-equivalents-saved figure that you're looking for. (@Linch, please do correct me if I'm wrong!)
I think the estimate is based on how many $$s the longtermist/x-risk portion of EA has access to, and then on trying to rationally allocate resources under that constraint. I'm not entirely sure what you mean by "accounting for future lives," but yes, there's an implicit assumption that under no realistic range of empirical uncertainty would it make sense to e.g. donate to AMF over longtermist interventions.
A moderate penalty to my numbers (from a presentist lens) is that at least some of the interventions I'm most excited about on the margin come from a civilizational resilience/recovery angle. However, I don't think this is a large effectiveness penalty, since many other people are similarly or much more excited on the margin about AI risk interventions (which have much more of the property that either approximately everybody dies or approximately no one dies).
So, I don't think elifland's analysis here is clearly methodologically wrong. Even though my numbers (and other analyses like mine) were based on the assumption that longtermist $$s would be used for longtermist goals, it could still be the case that they are more effective at preventing deaths of existing people than existing global health interventions are. At least to first order, this should not be that surprising: global health interventions were chosen under the constraint of being among the first interventions with a large evidential base, whereas global catastrophic risk and existential-risk-reducing interventions were chosen (among other things) on the basis of dialing back ambiguity aversion and weirdness aversion to close to zero.
I think the main question/crux is how much you want to "penalize for (lack of) rigor." GiveWell-style analyses have years of dedicated work put into them. Most of my gut pulls grew out of an afternoon of relatively clear thinking (and then maybe a few more days of significantly-lower-quality thinking and conversations, etc., that adjusted my numbers somewhat but not hugely). I never really understood the principled solutions to problems like the optimizer's curse and suspicious convergence.
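(For intuition on why the optimizer's curse matters here, a minimal toy simulation, with entirely made-up numbers, not anything drawn from GiveWell's or Eli's analysis: if you pick the option with the highest noisy estimate, the estimate of your chosen option systematically overstates its true value, and the overstatement grows with how noisy the estimates are.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 interventions with identical true cost-effectiveness,
# each evaluated with independent noisy estimates. All numbers are illustrative.
n_options = 20
true_value = 1.0      # e.g. "lives saved per $10k", same for every option
noise_sd = 0.5        # standard deviation of estimation error

n_trials = 10_000
chosen_estimates = []
for _ in range(n_trials):
    estimates = true_value + rng.normal(0, noise_sd, size=n_options)
    chosen_estimates.append(estimates.max())  # value we *think* the best option has

print("Mean estimated value of the chosen option:", round(np.mean(chosen_estimates), 2))
print("True value of every option:               ", true_value)
# The chosen option looks roughly twice as good as it really is, purely from
# selecting on noise. The standard principled fix is Bayesian shrinkage:
# regress each estimate toward a prior before picking, shrinking noisier
# (less rigorous) estimates more.
```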
PS: As an aside, I think it would be good practice to add some kind of caption beneath your table stating that these are rough estimates, and perhaps in some cases even the only available estimate for that quantity. I'm pretty concerned about long citation trails in longtermist analysis, where very influential claims sometimes bottom out in extremely rough and fragile estimates.
Strongly agreed.