Ok, thanks for expanding upon your view! It sounds broadly akin to how I’m inclined to address Pascal’s Mugging cases (treat the astronomical stakes as implying proportionately negligible probability). Astronomical stakes from x-risk mitigation seem much more substantively credible to me, but I don’t have much to add at this point if you don’t share that substantive judgment!
It sounds broadly akin to how I’m inclined to address Pascal’s Mugging cases (treat the astronomical stakes as implying proportionately negligible probability).
Makes sense. I see Pascal’s muggings as instances where the probability of the offers is assessed independently of their outcomes. In contrast, for any distribution with a finite expected value, the expected value density (the product of the PDF and the value) must eventually decay to 0 as the outcome increases. In meta-analyses, effect sizes, which can be interpreted as EVs under a given model, are commonly weighted by the reciprocal of their variance. Variance tends to increase with effect size, so larger effect sizes are usually weighted less heavily.
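Both points can be illustrated with a minimal sketch (hypothetical numbers; the lognormal is chosen only as an example of a distribution with a finite expected value):

```python
import math

def lognormal_pdf(x, mu=0.0, sigma=1.0):
    """PDF of a lognormal distribution, which has a finite expected value."""
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

def ev_density(x, mu=0.0, sigma=1.0):
    """Expected value density: the outcome times its probability density."""
    return x * lognormal_pdf(x, mu, sigma)

# The EV density decays towards 0 as the outcome grows.
for outcome in (1, 10, 100, 1000):
    print(outcome, ev_density(outcome))

# Inverse-variance weighting, as in meta-analyses: larger effect sizes
# often come with larger variances, so they receive smaller weights.
effects = [(0.2, 0.01), (1.5, 0.25)]  # (effect size, variance); hypothetical
weights = [1 / var for _, var in effects]
pooled = sum(e * w for (e, _), w in zip(effects, weights)) / sum(weights)
print(pooled)  # → 0.25, much closer to 0.2 than to 1.5
```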
You are welcome!
People sometimes point to Holden Karnofsky’s post “Why we can’t take expected value estimates literally (even when they’re unbiased)” to justify not relying on EVs (here are my notes on it from 4 years ago). However, the post does not argue against using EVs in general. I see it as a call to not treat all EV estimates the same, and to weight them appropriately.
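The kind of adjustment the post describes can be sketched as a normal-normal Bayesian update, where noisier estimates are pulled harder towards the prior (a minimal sketch with hypothetical numbers, not Karnofsky’s exact model):

```python
def bayesian_adjusted_ev(prior_mean, prior_var, estimate, estimate_var):
    """Precision-weighted average of a prior and an explicit EV estimate.

    The noisier the estimate (larger estimate_var), the more the
    adjusted value falls back towards the prior.
    """
    w_prior = 1 / prior_var
    w_est = 1 / estimate_var
    return (prior_mean * w_prior + estimate * w_est) / (w_prior + w_est)

# A modest prior and a wildly uncertain astronomical estimate:
# the adjusted EV stays close to the prior.
print(bayesian_adjusted_ev(1.0, 1.0, 1_000.0, 1_000_000.0))  # ≈ 1.0

# A precise estimate moves the adjusted EV almost all the way.
print(bayesian_adjusted_ev(1.0, 1.0, 10.0, 0.01))  # ≈ 9.9
```

So rather than taking EV estimates literally or discarding them, the high-variance ones simply end up carrying little weight.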