Misc thoughts:
Producing credible cost-effectiveness estimates for all of the world’s top philanthropists by dollar amount (everyone who might plausibly make the list) seems very time-intensive.
Supposing the list became popular, I imagine people would commonly ask “Why is so-and-so not on the list?”, and we’d end up needing a companion list of the most-asked-about absentees, with a justification for each omission. After a few minutes of thinking about it, I’m still not sure how to avoid this. It seems hard to celebrate the top philanthropists (by impact) without either claiming to be exhaustive or having people disagree with the rankings.
Yeah, it will be very time-intensive.
When we evaluate people who don’t make the list, we can maintain pages for them on the site showing what we do know about their donations, so that a search would surface their page even though they’re not on the list. Such a page would essentially explain why they’re not listed: it would show the donations we know about and distinguish the recipients we’ve evaluated directly from those to which we’ve assigned default effectiveness values for their category.
I think we can offload some of the research work onto people who think we’re wrong about who is on the list, by being very willing to update our data whenever anyone sends us credible evidence of a donation we missed, or persuasive evidence about the effectiveness of any org. The existence of a donation is far easier to verify than to discover. Maybe potential list-members themselves would send us a lot of this data from alt accounts.
I think Impact List does want to present itself as a best-effort attempt at being comprehensive. We’ll acknowledge that we’ve certainly missed things, but also that it’s a hard problem and no one has come close to doing it better. Combined with our receptivity to submitted data, my guess is that most people would be OK with that (conditional on their being OK with how we rank the people who are on the list).