Personally, when the aim is to improve motivation, I'd mostly advocate for trying to decouple motivation from total impact magnitude, rather than for trying to argue that a high impact magnitude is achievable.
If you attach your motivation to a specific magnitude like "$2,000 per life saved", you can expect it to fluctuate heavily whenever estimates change. Ideally, though, your motivation would stay at roughly the level that best serves your goals, and therefore remain consistent. I think this ideal is achievable to some degree and can be worked towards.
The main goal of a consequentialist should be to optimize a utility function; the specific magnitudes involved really shouldn't matter. If the greatest thing I could do with my life were to keep a small room clean, then I should spend my greatest effort on that (my own wellbeing aside).
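To make that concrete, here is a minimal sketch (the actions and utility numbers are made up purely for illustration): rescaling every estimate by a positive constant changes the magnitudes, but not which action comes out on top, which is the sense in which the specific magnitudes shouldn't matter.

```python
# Hypothetical actions with made-up utility estimates.
actions = {"donate": 3.0, "volunteer": 1.5, "keep_room_clean": 0.2}

best_original = max(actions, key=actions.get)

# Suppose new evidence revises every estimate down by 100x.
rescaled = {a: 0.01 * u for a, u in actions.items()}
best_rescaled = max(rescaled, key=rescaled.get)

# The ranking, and hence the optimal choice, is unchanged.
assert best_original == best_rescaled
```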
I think most people aren't initially comfortable re-calibrating their goals to arbitrary utility function magnitudes, but this is a skill that can be learned gradually, much like learning Stoic philosophy.
It's similar to learning how to be content regardless of one's conditions (extreme physical ones aside), as discussed in The Myth of Sisyphus.
https://en.wikipedia.org/wiki/The_Myth_of_Sisyphus