I’m curating this — thanks so much for putting it together, all! [1]
I think people are pretty confused about how prioritization or measurement of “good” can happen at all, and how it happens in EA (and honestly, it’s hard to think about prioritization because it is emotionally difficult) — and they're even more confused about the differences between different approaches to prioritization. That makes this a really useful addition to the conversation.
I do wish the post had a summary, though. I’ve copy-pasted the summary from Zoe’s (incredible) series below, slightly modified.
The post is pretty packed with information, shares relevant context that’s useful outside of these specific questions (like how to value economic outcomes), and also has lots of links to other interesting readings.
I also really appreciate the cross-organization collaboration that happened! It would be nice to see more occasions where representatives of different approaches or viewpoints come together like this.
- GiveWell uses moral weights to compare different units (e.g. doubling incomes vs. saving the life of a child under 5). These are based 60% on donor surveys, 30% on a 2019 survey of 2,000 people in Kenya and Ghana, and 10% on staff opinion. [Note from Lizka: this is largely to create an exchange rate between different types of good outcomes.]
- Open Philanthropy’s global health and wellbeing team uses the unit of “a single dollar to someone making $50K per year” and then compares everything to that. E.g. averting a DALY is worth 100K of these units.
- The Happier Lives Institute focuses on wellbeing, measured in WELLBYs. One WELLBY is a one-point increase on a 0–10 life satisfaction scale for one year.
- Founders Pledge values cash at $199 per WELLBY. They have conversion rates from WELLBYs to income doublings to deaths avoided to DALYs avoided, using work from some of the orgs above. This means they can get a dollar figure they’re willing to spend for each of these measures.
- Innovations for Poverty Action asks different questions depending on the project stage (e.g. idea, pilot, measuring, scaling). Early questions can be things like whether it’s the right solution for the audience; only further down the line can you ask “does it actually save more lives?”
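To make the arithmetic in the summary concrete, here is a minimal illustrative sketch (not any organization’s actual model) of how these unit conversions work. The 60/30/10 weights, the $199-per-WELLBY figure, and the 100K-units-per-DALY figure come from the summary above; the component moral-weight values passed in are hypothetical.

```python
# Illustrative sketch of the unit-conversion arithmetic described above.
# Not any org's actual model; input values are hypothetical.

def blended_moral_weight(donor: float, beneficiary: float, staff: float) -> float:
    """GiveWell-style blend: 60% donor surveys, 30% beneficiary survey,
    10% staff opinion (weights from the summary above)."""
    return 0.6 * donor + 0.3 * beneficiary + 0.1 * staff

# Founders Pledge-style dollars-to-WELLBYs conversion, using the
# $199/WELLBY figure quoted in the summary.
DOLLARS_PER_WELLBY = 199

def wellbys_from_dollars(dollars: float) -> float:
    return dollars / DOLLARS_PER_WELLBY

# Open Phil-style benchmark: value of $1 to someone making $50K/year;
# per the summary, averting a DALY is worth 100K of these units.
UNITS_PER_DALY_AVERTED = 100_000

# Example: a $1,990 grant, in WELLBYs.
print(wellbys_from_dollars(1990))  # 10 WELLBYs
```

The point of reducing everything to a common unit like this is that a bednet program measured in DALYs and a cash-transfer program measured in income doublings can land on the same scale and be compared directly.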
I also just really appreciate this quote from the post:
> So what do we do? Well, we try to reduce everything to common units and by doing that we can more effectively compare across these different types of opportunities. But this is really, really hard! I can’t emphasise enough how difficult this is and we definitely don’t endorse all of the assumptions that we make. They’re a simplifying tool, they’re a model. All models are wrong, but some are useful, and there is constant room for improvement.
[Disclaimer: written quickly, about a post I read some time ago that I just skimmed to remind myself of relevant details.]