Given the recent post on the marketability of EA (which went so far as to suggest excluding MIRI from the EA tent to make EA more marketable, or maybe that was a comment in response to the post; I don’t remember), here is a brief reaction from someone who is excited about Effective Altruism but has various reservations. My main reservation, so you have a feel for where I’m coming from, is that my goal in life is not to maximize the world’s utility but, roughly speaking, to maximize my own utility and end-of-life satisfaction. I therefore find it hard to get excited about theoretically utility-maximizing causes rather than donating to things I viscerally care about. I know this will strike most people here as incredibly squishy, but I’d bet that much of the public outside the EA community has a similar reaction, though few would actually come out and say it.
I like your idea about high-leverage values spreading.
The ideas about Happy Animal Farm / Promoting Universal Eudaimonia seem nuts to me, so much so that I actually reread the post to see whether it was a parody. If these ideas gain widespread popularity within the EA movement, I will go from being intrigued by EA and excited to bring it up in conversation to never raising it with all but the most open-minded / rationalist people, or raising it in the tone of “yeah, these guys are crazy, but this one idea they have about applying data analysis to charity has some merit… Earning to Give is intriguing too....” I could be wrong, but I’d strongly suspect that most people who are not thoroughgoing utilitarians find it incredibly silly to argue that creating more beings who experience utility is a good cause, and this would quickly push EA away from being taken seriously in the mainstream.
The humane insecticides idea doesn’t seem AS silly as those two above, but it places EA in the same mental category as my most extreme caricature of PETA (and I’m someone who eats mostly vegan, and some Certified Humane, because I care about animal welfare). I don’t think insects are a very popular cause.
Just my 2 cents.
Yeah, I definitely understand that reaction, which is why I wasn’t sure it was a good idea to post this. It looks like it probably wasn’t. Thanks for the feedback.
Please don’t be discouraged! I very much appreciate this post.
I am a negative-leaning hedonistic utilitarian who has thought a little about the effectiveness of intense pleasure production. Like you, I estimate that donating to MIRI, ACE, and its recommended charities is more efficient than a wireheading operation at this time.
That being said, I wish more people would at least consider the possibility that wireheading is altruistic. If relative costs change in the future, it may prove to be effective. Unfortunately, the concept of intense pleasure production causes most people, even many EAs, to recoil in disgust.
I would enjoy discussing cost estimates of intense pleasure production/wireheading more with you; please send me a message if you’re interested. :)
You don’t have to be concerned about somewhat outré ideas (more outré than AI risk, I guess) becoming popular among EAs, since their tractability (how easily someone can gain widespread support for scaling them up) will necessarily be very limited. That will make them vastly inferior to causes whose importance already enjoys widespread support. There may be exceptions to this rule, but I think by and large it holds.
I think there are also a lot of non-selfish reasons for not wanting to breed a load of rats, or to protect insects that even entomologists think don’t have a concept of suffering / pain that’s in any way equivalent to what we consider morally valuable.
Not all entomologists think that insects don’t have suffering or pain.
Great to know. Can you point me to an entomologist who thinks, or a paper that argues (one that isn’t philosophy), that insects have suffering in any way equivalent to what we would understand it as, please?
Concept of suffering != experience of suffering.
Human babies don’t have such concepts either, but it’s still realistic to say they experience suffering.
Reading Tom charitably, I’m not sure he meant to talk about whether insects themselves have an idea of suffering?
Sorry, I was using “suffering” loosely. The quality of suffering largely determines its value in my eyes. I’ve seen entomologists argue there is no possible way insects can feel suffering. I don’t necessarily go along with that; we deny suffering at every opportunity (Black people under apartheid denied painkillers, animals thought not to feel pain, fish, the mentally ill, etc.). But really, a little system of chemicals resembling something simpler than electronic systems we’ve built? The point I’m trying to make is that this seems like a rabbit hole. Get out of it and wake up to what really matters. There are literally millions of things anyone could be getting on with that are more pressing than the imaginary plight of insects.
I don’t understand what’s off-putting about optimizing far-future outcomes. This is a good sketch of what we are talking about: http://www.abolitionist.com/
But apparently even people who call themselves “effective altruists” would rather downvote than engage in rational discussion.
FYI, I believe you’re getting downvoted because your second paragraph comes across as mean-spirited.