Thanks for this!
The reason I brought up interventions they would want to fund is that I figured they were interested in improving the WELLBY metric. If they're planning on being a regranter, then that's a whole different story to me.
I agree that they might very well be incommensurable. However, I suspect that different organizations will want to use different metrics, and someone like Open Phil, or one day GiveWell, might have to be able to compare the two somehow.
No worries (:
You’re right that metric conversions are of interest to some orgs; for instance, GiveWell and HLI both use moral weights to convert between averting death and increasing income. Other orgs don’t: TLYCS looks at four core outcomes (lives saved, life-years added, income gained, carbon removed) and maintains them separately, and Open Phil has its “worldview buckets”. I lean towards converting metrics, mostly for the reasons Nuno writes about, but I’m also swayed by Holden’s argument that cluster thinking (a main driver of worldview diversification) is more robust w.r.t. handling Knightian uncertainty, so I’m left unsure which approach (“to convert or not to convert?”) is best for EA as a whole.
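To make the “convert” side concrete, here’s a minimal sketch of what a moral-weights conversion might look like. All the weight values and outcome names are made-up placeholders for illustration, not actual GiveWell or HLI figures:

```python
# Illustrative sketch: collapsing heterogeneous outcomes into one common
# unit via moral weights. The weights below are hypothetical placeholders,
# NOT real GiveWell/HLI numbers.

# Moral weights, expressed relative to one unit of "doubling a person's
# consumption for a year" (the baseline unit, weight 1.0).
MORAL_WEIGHTS = {
    "income_doublings": 1.0,   # baseline unit
    "deaths_averted": 100.0,   # hypothetical: 1 death averted ~ 100 doublings
    "wellbys": 2.5,            # hypothetical: 1 WELLBY ~ 2.5 doublings
}

def total_value(outcomes: dict[str, float]) -> float:
    """Collapse a dict of outcome counts into a single common-unit total."""
    return sum(MORAL_WEIGHTS[name] * amount for name, amount in outcomes.items())

# Two hypothetical programs, now comparable on a single scale.
program_a = {"deaths_averted": 3.0, "income_doublings": 50.0}
program_b = {"wellbys": 180.0}

print(total_value(program_a))  # 3*100 + 50*1 = 350.0
print(total_value(program_b))  # 180*2.5 = 450.0
```

The “don’t convert” approach is roughly the same data structure without the final sum: each key stays its own bucket (TLYCS’s four outcomes, or Open Phil’s worldview buckets), and you never call `total_value` across them.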
Interesting stuff, and out of my depth! Seems like something I should nerd out on for a while :) Any suggestions for where I could start?