Has anyone reframed cause-prioritization choices (such as x-risk vs. poverty) as losses, to check whether they're really biased?
I've read a little about the possibility that preferences for poverty reduction/global health/animal welfare causes over x-risk reduction may be due to some kind of ambiguity-aversion bias. Given a US$3,000 donation, the choice is between (A) saving one life (with high certainty, in the present) and (B) potentially saving 10^20 future lives (I know this may be a conservative guess, but it's the reasoning that matters here, not the numbers) by making something like a 10^-5 marginal contribution to reducing some extinction risk by 10^-5. People tend to prefer the "safe" option A, despite the much larger expected payoff of B. However, such a bias is sensitive to framing effects: people usually prefer sure gains (like A) but uncertain losses (like B' below). So I was trying to find out, without success, whether anyone had reframed this decision as a matter of losses, to see whether one prefers, e.g., (A') reducing deaths from malaria from 478,001 to 478,000 or (B') reducing the odds of extinction (minus 10^20 lives) by 10^-10.
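For concreteness, here is a minimal sketch of the expected-value arithmetic behind the options above, using the post's illustrative numbers; the probabilities and payoffs are assumptions chosen to make the framings comparable, not real estimates:

```python
# Minimal sketch of the expected-value arithmetic with the post's illustrative numbers.
# All figures are assumptions for the framing comparison, not real estimates.

FUTURE_LIVES = 1e20    # potential future lives at stake (conservative placeholder)
CONTRIBUTION = 1e-5    # marginal share of the risk-reduction effort the donation buys
RISK_REDUCTION = 1e-5  # reduction in extinction probability achieved by that effort

# Gain framing
ev_A = 1.0                                            # (A) one life saved with ~certainty
ev_B = CONTRIBUTION * RISK_REDUCTION * FUTURE_LIVES   # (B) ~1e10 expected lives

# Loss framing (same expected values, described as losses averted)
ev_A_prime = 478_001 - 478_000                        # (A') one fewer malaria death
ev_B_prime = 1e-10 * FUTURE_LIVES                     # (B') ~1e10 expected lives not lost

print(f"A : {ev_A:.0e} expected lives   B : {ev_B:.0e}")
print(f"A': {ev_A_prime:.0e} expected lives   B': {ev_B_prime:.0e}")
# If people choose A over B in the gain framing but B' over A' in the loss framing,
# that pattern suggests a framing-driven bias, since the expected values are identical.
```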
Perhaps there's a better way to reframe this choice, but I'm not interested in discussing one particular example (though I am concerned with the possibility that there's no bias-free way of framing it). My point is that if people choose A over B in the gain framing but B' over A' in the loss framing, then we have a strong case for the existence of a bias.
(I'm well aware of other objections against x-risk causes, such as Pascal's mugging and discount-rate arguments, but I think they've received due attention and should be discussed separately. Also, I'm mostly thinking about donation choices, not about policy or career decisions, which are a completely different matter; however, if this experiment confirmed the existence of such a bias, it could influence the latter, too.
I'm new here. I suspect someone has probably already asked a similar question somewhere else, but I couldn't find it, so sorry for bothering you. I'm mostly trying to satisfy my curiosity; however, there's a small probability that it touches an important unsolved dilemma about global priorities: x-risk vs. "safe" causes. I'm not looking for karma, though you can't have too much of it, right?)
Perhaps I should warn: the sensitivity of ambiguity aversion to framing effects is contested by Voorhoeve et al. (philarchive.org/archive/VOOAAF); however, the authors recognize that their conclusion goes against most of the literature.