In 80,000 Hours' What is social impact? A definition, under the subheading "Is this just utilitarianism?", Ben Todd wrote (bolded parts mine):

> No. Utilitarianism claims that you’re morally obligated to take the action that does the most to increase wellbeing, as understood according to the hedonic view.
>
> Our definition shares an emphasis on wellbeing and impartiality, but we depart from utilitarianism in that:
>
> - We don’t make strong claims about what’s morally obligated. Mainly, we believe that helping more people is better than helping fewer. If we were to make a claim about what we ought to do, it would be that we should help others when we can benefit them a lot with little cost to ourselves, which is much weaker than utilitarianism.
> - Our view is compatible with also putting weight on other notions of wellbeing, other moral values (e.g. autonomy), and other moral principles. In particular, we don’t endorse harming others for the greater good.
> - We’re very uncertain about the correct moral theory and try to put weight on multiple perspectives.
>
> Overall, many members of our team don’t identify as being straightforward utilitarians or consequentialists.
>
> Our main position isn’t that people should be more utilitarian, but that they should pay more attention to consequences than they do — and especially to the large differences in the scale of the consequences of different actions.
>
> If one career path might save hundreds of lives, and another won’t, we should all be able to agree that matters.
>
> In short, we think ethics should be more sensitive to scope.
So this mirrors Ben’s comment above.
I’m personally quite glad to see this made explicit in such an introductory, high-traffic article.