Would you be interested in having a section on the website that is basically “Ways to be an EA while not being a utilitarian?” I say this as someone who is very committed to EA but very against utilitarianism. Fair enough if the answer is no, but if the answer is yes, I’d be happy to help out with drafting the section.
Nitpick: This quote here seems wrong/misleading: “What matters most for utilitarianism is bringing about the best consequences for the world. This involves improving the wellbeing of all individuals, regardless of their gender, race, species, and their geographical or temporal location.”
What do you mean by “this involves”? If you mean “this always involves,” it is obviously false. If you mean “this typically involves,” then it might be true, but I am pretty sure I could convince you it is false also. For example, very often more utility will be created if you abandon some people—even some entire groups of people—as lost causes and focus on creating more happy people instead. Most importantly, if you mean “for us today, it typically involves,” it is also false, because creating a hedonium shockwave dramatically decreases the wellbeing of most individuals on Earth, at least for a short period before they die. :P
(You may be able to tell from the above some of the reasons why I think utilitarianism is wrong!)
On the nitpick, I agree that the wording is misleading. Bringing people into existence is not usually understood to “improve their welfare”, since someone who doesn’t exist has no welfare (not even welfare 0). It’s probably better to say “benefit”, although it’s also a question for philosophy whether you can benefit someone by bringing them into existence.
Also, even “improve” isn’t quite right to me if we’re being person-affecting, since it suggests their welfare will be higher than before, but we only mean higher than otherwise.
“For example, very often more utility will be created if you abandon some people—even some entire groups of people—as lost causes and focus on creating more happy people instead.”
A hedonium shockwave can involve a lot of killing, as you suggest.
I think that would fit better on another website, one specifically dedicated to EA rather than to utilitarianism; Utilitarianism.net could possibly link to it. Maybe an article for https://www.effectivealtruism.org/?
Anyhow, thanks for the consideration. Yeah, maybe I’ll write a blog post on the subject someday.
This might be relevant:
“The world destruction argument” by Simon Knutsson, with appendix here.
My nitpick was not about the nonexistence stuff, it was about hurting and killing people.
I had this in mind:
“A hedonium shockwave can involve a lot of killing, as you suggest.”