Similar issues come up in poker—if you bet everything you have on one bet, you tend to lose everything too fast, even if that one bet considered alone was positive EV.
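A quick simulation makes the point concrete. This is a minimal sketch, assuming an even-money bet you win 60% of the time, so every individual wager has +20% EV:

```python
import random

def simulate(bet_fraction, p_win=0.6, rounds=100, trials=10_000):
    """Wager bet_fraction of the bankroll each round on an even-money
    bet won with probability p_win (positive EV whenever p_win > 0.5).
    Returns the share of trials that finish above the starting $1."""
    ahead = 0
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * bet_fraction
            bankroll += stake if random.random() < p_win else -stake
        ahead += bankroll > 1.0
    return ahead / trials

print(simulate(1.0))  # all-in every round: ~0.0 — a single loss is ruin
print(simulate(0.2))  # staking 20% per round: most trials finish ahead
```

Every single bet is positive EV, yet the all-in strategy survives 100 rounds only with probability 0.6^100, which is effectively never.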
I think you have to treat expected value as an approximation. There is some real, ideal morality out there, and we imperfect people haven't found it yet. But, like Newtonian physics, we have a pretty good approximation: the expected value of utility.
Yeah, in thought experiments involving numbers like 10^52, it sometimes seems to break down. Just like Newtonian physics breaks down when analyzing a black hole. Nevertheless, expected value is the best tool we have for analyzing moral outcomes.
Maybe we want to be maximizing log(x) here, or maybe that’s just an epicycle and someone will figure out a better moral theory. Either way, the principle that a human life in ten years shouldn’t be worth less than a human life today seems like a plausible foundation.
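For what it's worth, maximizing log(x) is exactly the fix for the poker case above. A quick numerical check (a sketch, reusing the 60% even-money bet from earlier) shows that expected log wealth peaks at the Kelly fraction 2p - 1, not at betting everything:

```python
import math

def expected_log_growth(f, p=0.6):
    """Per-round E[log wealth] when betting fraction f of the bankroll
    on an even-money bet won with probability p."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# Scan candidate fractions: the maximum lands at 2p - 1 = 0.2,
# the Kelly fraction, not at betting the whole bankroll.
best = max((f / 100 for f in range(100)), key=expected_log_growth)
print(best)  # 0.2
```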
Expected value is only one criterion in the (consequentialist) evaluation of an action. There are others, e.g. risk minimisation.
It would be a massive understatement to say that not all philosophical or ethical theories so far boil down to “maximise the expected value of your actions”.