No worries (:
You’re right that metric conversions are of interest to some orgs; for instance, GiveWell and HLI both use moral weights to convert between averting death and increasing income. Other orgs don’t: TLYCS looks at 4 core outcomes (lives saved, life-years added, income gained, carbon removed) and maintains them separately, and Open Phil has its “worldview buckets”. I lean towards converting metrics, mostly for the reasons Nuno writes about, but I’m also swayed by Holden’s argument that cluster thinking (a main driver of worldview diversification) is more robust to Knightian uncertainty, so I’m left unsure which approach (“to convert or not to convert?”) is best for EA as a whole.
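For concreteness, here’s a minimal sketch of what the “convert” approach looks like mechanically. The weight values are made-up placeholders for illustration, not GiveWell’s or HLI’s actual numbers:

```python
# Sketch of the "convert" approach: collapse heterogeneous outcomes into one
# comparable unit via moral weights. Baseline unit here: doubling one
# person's consumption for a year. The deaths-averted weight is a
# hypothetical placeholder, not any org's published figure.

MORAL_WEIGHTS = {
    "income_doublings": 1.0,    # baseline unit
    "deaths_averted": 100.0,    # hypothetical: 1 death averted ~ 100 doublings
}

def total_value(outcomes: dict[str, float]) -> float:
    """Sum each outcome quantity times its moral weight."""
    return sum(MORAL_WEIGHTS[k] * v for k, v in outcomes.items())

# Two programs that stay incomparable under separate TLYCS-style metrics
# become directly comparable once converted:
program_a = {"deaths_averted": 3.0, "income_doublings": 0.0}
program_b = {"deaths_averted": 0.0, "income_doublings": 400.0}
print(total_value(program_a))  # 300.0
print(total_value(program_b))  # 400.0
```

The whole debate is really about whether a single `MORAL_WEIGHTS` table like this can be trusted: the “don’t convert” camp keeps the outcomes in separate buckets precisely because those weights are so uncertain.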
Interesting stuff and out of my depth! Seems like something I should nerd out on for a while :) Any suggestions on where I could start?