Discounting the future consequences of welfare-producing actions:
There is near-unanimous agreement among moral philosophers that welfare itself should not be discounted over time.
However, many systems in the world are chaotic, and it is largely uncontroversial that in consequentialist theories the value of an action should depend on the expected utility it produces.
Is it possible, then, that the rational conclusion is to exponentially discount future welfare, as a way of accounting for the exponential sensitivity to initial conditions exhibited by the long-term consequences of one's actions?
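A minimal numerical sketch (my own illustration, not part of the original note) of the exponential sensitivity in question: the logistic map x_{n+1} = 4·x_n·(1 − x_n) is chaotic with Lyapunov exponent ln 2, so a tiny error in the initial state grows by roughly a factor of 2 per step until it saturates. If predicting the consequences of an action requires knowing the state, then the horizon over which an action's consequences can be forecast shrinks logarithmically in the initial precision, which is the regime in which an exponential discount on expected future welfare could plausibly be motivated.

```python
def logistic(x):
    """One step of the fully chaotic logistic map x -> 4x(1-x)."""
    return 4.0 * x * (1.0 - x)

def divergence(x0, eps, steps):
    """Track the gap between two trajectories that start eps apart."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        gaps.append(abs(a - b))
    return gaps

# A perturbation of 1e-9 grows roughly as 2**n, so after ~30 steps
# the two trajectories are effectively uncorrelated (gap of order 1).
gaps = divergence(0.2, 1e-9, 40)
```

The upshot is that beyond the predictability horizon the action contributes essentially nothing knowable to expected utility, so the expected contribution of an action to welfare at time t could decay roughly like exp(−λt), mimicking an exponential discount rate equal to the Lyapunov exponent λ.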