Maximizing expected value can have counterintuitive implications like prioritizing insects over humans or pursuing astronomical payoffs with tiny probabilities.
no. that’s an argument about which entities you choose to include in the calculation, not against expected value itself. the rational expected value calculus is to care about the smallest set of people that includes yourself. or, more precisely, for a gene to care only about itself.
Alternatives like contractualism and various forms of risk aversion may better align with moral intuitions.
“risk aversion” is just decreasing marginal utility. e.g. suppose you take a guarantee of a million dollars over a 50% shot at 3 million. with u = log2(wealth), the expected utility calculation is:
100% of 1M: log2(1,000,000) ≈ 19.93
50% of 3M: log2(3,000,000) / 2 ≈ 21.517 / 2 ≈ 10.76
thus the guarantee of 1M obviously makes sense unless you’re already quite wealthy.
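here’s a minimal python sketch of that calculation. the log2 utility function is from the text; the starting-wealth parameter is my own addition to illustrate the “unless you’re already quite wealthy” part, and the text’s numbers effectively give the losing branch a utility of 0:

```python
from math import log2

def eu_guarantee(wealth, prize=1_000_000):
    # utility of taking the sure $1M on top of existing wealth
    return log2(wealth + prize)

def eu_gamble(wealth, prize=3_000_000, p=0.5):
    # expected utility of a p chance at $3M, (1 - p) chance of nothing extra
    return p * log2(wealth + prize) + (1 - p) * log2(wealth)

# the calculation from the text, which treats the losing branch as utility 0:
print(log2(1_000_000))        # ~19.93
print(0.5 * log2(3_000_000))  # ~10.76, so take the guarantee

# with existing wealth in the picture the gamble catches up; under these
# assumptions the break-even point is existing wealth of exactly $1M
for w in (10_000, 1_000_000, 10_000_000):
    print(w, round(eu_guarantee(w), 3), round(eu_gamble(w), 3))
```

so under log2 utility the gamble only starts to win once your existing wealth is already about the size of the guaranteed prize.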
Practical decision-making requires wrestling with moral and empirical uncertainties.
what is “moral” uncertainty? morality is just “genes maximizing their expected number of copies made”.
the idea that there’s some viable alternative to expected utility maximization is just thoroughly refuted by everything we know about decision making.
http://www.rangevoting.org/UtilFoundns
http://www.rangevoting.org/Mill
http://www.rangevoting.org/OmoUtil.html
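to make “expected utility maximization” concrete as a decision procedure, here’s a generic sketch (my own illustration, not code from those pages): represent each action as a lottery of (probability, outcome) pairs and pick the action whose probability-weighted utility is highest.

```python
from math import log2

def expected_utility(lottery, u=log2):
    # lottery: list of (probability, wealth-outcome) pairs
    return sum(p * u(outcome) for p, outcome in lottery)

def best_action(actions):
    # actions: dict mapping action name -> lottery
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "guarantee_1M": [(1.0, 1_000_000)],
    "coinflip_3M":  [(0.5, 3_000_000), (0.5, 1)],  # $1 floor keeps log2 defined
}
print(best_action(actions))  # -> guarantee_1M
```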