I’d be interested in an elaboration on why you reject expected value calculations.
My personal feeling is that expected-value calculations with very small probabilities are unlikely to be helpful, because my calibration for these probabilities is very poor: a one in ten million chance feels identical to a one in ten billion chance for me, even though their expected-value implications are very different. But I expect to be better-calibrated on the difference between a one in ten chance and a one in a hundred chance, particularly if—as is true much of the time in career choice—I can look at data on the average person’s chance of success in a particular career. So I think that high-risk high-reward careers are quite different from Pascal’s muggings.
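To put rough numbers on the calibration point, here's a toy sketch in Python (the payoff figure and both probabilities are made up for illustration, not taken from anything above):

```python
# Toy illustration: two probabilities that feel identical to an
# uncalibrated gut can differ by a factor of 1000 in expected value.
def expected_value(probability: float, payoff: float) -> float:
    """Expected value of a single all-or-nothing gamble."""
    return probability * payoff

payoff = 1e12  # a stipulated astronomical payoff (made-up number)

ev_a = expected_value(1e-7, payoff)   # 1 in 10 million  -> 100,000.0
ev_b = expected_value(1e-10, payoff)  # 1 in 10 billion  -> 100.0
print(ev_a / ev_b)                    # 1000.0

# By contrast, 1-in-10 vs. 1-in-100 career odds can be checked against
# base-rate data on how often people succeed in a given field, so an EV
# built on them is at least testable.
```

The whole calculation hinges on a probability estimate that intuition can't distinguish across three orders of magnitude, which is exactly why the small-probability regime is untrustworthy.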
Can you explain why (and whether) you disagree?
That’s a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don’t think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they’d be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.
EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions, such as public health interventions. It’s sometimes useful when thinking about careers. But I used to feel compelled to pursue careers that I hated and probably wouldn’t be good at, just on the off chance they would work out. Now I see morality as being more closely tied to what I find meaning in (again, anti-realism). And I don’t find meaning in saving a trillion EV lives or whatever.
The way I think about it is that model uncertainty increases drastically in the tails.
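One toy way to make that concrete (my sketch, with made-up credences, not something anyone above proposed): treat your stated probability as the output of a model you only partly trust, and mix in a fallback term for the chance that the model is simply wrong.

```python
# Minimal sketch: mix the model's probability estimate with a fallback
# term representing "my model of the situation is wrong entirely."
def effective_probability(p_model: float, q: float, p_fallback: float) -> float:
    """Credence-weighted mixture of a model estimate and a model-error fallback."""
    return q * p_model + (1 - q) * p_fallback

q = 0.99           # credence that the model is right (made up)
p_fallback = 1e-4  # crude stand-in for unmodeled scenarios (made up)

# Mid-range estimate: the model's number still drives the answer.
print(effective_probability(0.1, q, p_fallback))    # ~0.099

# Tail estimate: the model-error term swamps the one-in-ten-billion figure.
print(effective_probability(1e-10, q, p_fallback))  # ~1e-6
```

In the tails the answer is almost entirely the model-error term, so the original point estimate stops doing any work.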