The ethical theory of utilitarianism essentially states that “we ought to act to improve the well-being of everyone by as much as possible,” which has a strong “do the most good” vibe. There are certainly a lot of arguments for and against utilitarianism-style ethics.
I think one relevant intuition people have is that there's never a point at which you'd stop wanting to help additional people: if one action helps N people (in expectation) and another helps N+1, I'd rather do the latter.
A conflicting intuition is that we don't feel that much better about helping 10 billion people than helping 9 billion people. The essay "On caring" argues that it's still really important to help that extra billion people.
Another idea is that not maximizing (e.g. not saving an extra person's life because you spent that money on a fancy restaurant instead) amounts to allowing harm to happen, and some philosophers hold that allowing harm is no different from doing harm.
You may also be interested in the Von Neumann–Morgenstern utility theorem, which shows that any agent whose preferences satisfy a few seemingly reasonable axioms behaves as if it were maximizing the expected value of some utility function.
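Roughly, and stated from memory rather than as a formal citation: if an agent's preferences $\succeq$ over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists a utility function $u$, unique up to positive affine transformation, such that

$$L \succeq M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u],$$

i.e. the agent chooses among lotteries as if it were maximizing expected utility.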
As a side note, I don’t think you need to deeply care about maximization to care about EA; for example, you might feel fine about frequenting fancy restaurants. EA is not utilitarianism; the core idea of EA is increasing the quality of your altruism, not the quantity (although plenty of EAs feel inspired to increase the quantity as well, and some of these EAs are utilitarians).
Thanks so much for your comment. I just finished reading or skimming the links you brought up, and I'm still thinking it over. For now I just wanted to say I appreciate your taking the time to post this.