Currently doing local AI safety movement building in Australia and NZ.
Chris Leong
“So there’s not an analogous situation to help other people understand this from an animal advocate’s perspective, but to put it mildly, when other people eat animals at EA events, it feels as if some people at that event gathered in a circle and began writing hate articles against the Centre for Effective Altruism or cutting up malaria nets that the Against Malaria Foundation was planning to distribute. It feels like a slap in the face to our work, and worse, like a dismissal of the plight of the billions of suffering farmed animals.”
I think that it is important for EA to be a pluralistic movement. That is, there may be some EAs who are completely onboard with animal rights and global poverty reduction, but think existential risk due to AI is silly. There may be some EAs who think that the figures are such that existential risk outweighs every other cause and that the people working on other causes are silly. I don’t see this as problematic. On one hand, I can see that having all events being vegetarian makes it easier to recruit animal rights activists into the movement. On the other hand, it can create a situation where non-vegetarians feel out of place, which could be very bad if it reduces the eventual size of the movement.
I agree that social networks are very important for allowing these kinds of groups to grow. I’m sure there have been small groups around the world dedicated to doing maximal good, but without the Internet it is very hard for a coherent movement to form.
Why are you paying full price, instead of trying to buy them at a discount? Or has that changed? Like let’s suppose that each project you fund becomes 10% more likely because of the certificate scheme. In this situation, you should be trying to buy a certificate for the entire project at 10% of the altruistic value.
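To make the discount argument concrete, here is a minimal sketch with made-up numbers (the function name and the figures are purely illustrative, not part of any actual certificate scheme): if buying the certificate only raises the project’s probability of happening by 10 percentage points, then the fair price for a certificate covering the whole project is 10% of its altruistic value.

```python
def fair_certificate_price(altruistic_value, counterfactual_boost):
    """Price a whole-project certificate by the marginal probability
    of success that the purchase adds, not by the full project value."""
    return altruistic_value * counterfactual_boost

# Illustrative example: a project worth 1,000 units of altruistic value,
# made 10% more likely to happen because of the certificate purchase.
price = fair_certificate_price(1000, 0.10)
print(price)  # 100.0 — pay 10% of the value, not full price
```

The point is just that paying full price only makes sense if the purchase is fully counterfactual, i.e. the project would not have happened at all without it.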
Thanks for posting this. I found it useful since I don’t follow the blog, but am still interested in some of the content.
Thanks for posting this. It is very easy to come off as arrogant when promoting Effective Altruism, but this article helps by providing specific examples of what you can reasonably say.
I’m very interested in seeing how your project goes. I suspect that it will take more time to build trust and credibility with the EA community, but I wish you the best of luck for this.
You don’t have to actually read the entire article. It is perfectly fine if you decide to upvote an article after the first paragraph or two. If it is high quality, I would suspect that enough people will upvote the article that the one person who can’t be bothered clicking on it doesn’t matter. Furthermore, if it isn’t worth your time to click and scroll to upvote, then upvoting that article can’t be particularly important.
I agree with Bernadette Young that these kinds of discussions have the potential to harm the movement in terms of public relations, but I’m also principally committed to free speech, as it is important for our assumptions to be challenged. I think renaming this topic is a good start on the PR front. I think it is important to realise that, in the longer term, increases in development will most likely lead to improvements in animal rights as the rich have more time to think about these things.
Population ethics: In favour of total utilitarianism over average
What do you mean by maximising average utility for existing people only? As you’ve noted, with a fixed number of people average and total utilitarianism are identical. It is only when we consider whether we should create (or destroy!) people that average and total utilitarianism come into play.
I’ve argued that if someone gets positive utility, then the universe is better when they exist. If I wanted to reduce this argument to a slogan, it would be “Good things are good”. As soon as it is accepted that average utilitarianism is flawed, most of the incentive to try to optimise things other than total utility goes away. There exist a large number of strange utility functions, but the arguments for these seem to be rather unpersuasive.
Also, which parts in particular were hard to understand?
You aren’t harmed by not being brought into existence, but there is an opportunity cost, that is, if you would have lived a life worth living, that utility is lost.
If it is good for someone to experience a life worth living, then surely we would want as many people as possible to experience this.
Average utilitarianism vs total utilitarianism isn’t minutiae; there’s actually a pretty massive difference in the entire way we think about morality between those two systems.
The part you want is: “Both average and total utilitarianism begin with an axiom that seems obviously true. For total utilitarianism this axiom is: ‘It is good for a SEP with positive utility to occur if it doesn’t affect anything else...’” I probably should have formalised this a bit more.
Also, if you follow the link to Less Wrong, I give a separate and more formal argument in the second section. I removed that argument because I decided that, while convincing, it had no philosophical advantages over (a more formalised version of) the argument that I did give on this page.
I’ll give you one example where it makes a difference. Take factory farming: if we care about average utility, then it is clearly bad, as the conditions are massively pulling down the average. If we care about total utility, then it is possible that the animals may have a small but positive utility, and that fewer animals would exist if not for factory farming, so its existence might work out as a positive.
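The two verdicts can be shown with toy numbers (all utilities here are invented purely for illustration, not empirical claims about animal welfare):

```python
def total(utilities):
    return sum(utilities)

def average(utilities):
    return sum(utilities) / len(utilities)

# Hypothetical scenario: without factory farming, one animal exists
# at utility 5; with it, six animals exist at a small positive utility 1.
without_farming = [5]
with_farming = [1, 1, 1, 1, 1, 1]

# Average utilitarianism condemns farming: the average drops from 5 to 1.
print(average(with_farming) < average(without_farming))  # True

# Total utilitarianism can endorse it: six animals at 1 sum to 6 > 5.
print(total(with_farming) > total(without_farming))  # True
```

So the same situation comes out bad under the average view and good under the total view, which is why the choice between them is far from minutiae.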
Re: other questions. I’ll probably rewrite and repost a more refined version of my argument at some point, but that is work for another day.
Even if people don’t believe that morality exists, a perfectly rational agent would still have consistent preferences. That said, there is an argument for epistemic learned helplessness (made by Scott Alexander on his old blog, http://squid314.livejournal.com/350090.html?page= ), that is, an argument for not always updating according to logic.
That’s interesting, I’ve never really thought about temporality, but I don’t see any reason why a future person would be valued less.
That said, I see critical level utilitarianism as flawed for very similar reasons. I’ll probably write about it some time.
I think the argument for that decision not being “sadistic”, but the least bad option is reasonable if he can win the object level argument.
However, his explanation of why a lot of people living lives worth living is bad is flawed: he constructs people with lives barely worth living, then appeals to their status as an underclass to emotively push them below the life-worth-living line. Unfortunately, any underclass status needs to be included in the utility calculation when determining whether or not a life is worth living.
Upvoted as these are interesting ideas, particularly the humane pesticide. On the other hand, the happy animal farm idea is too far outside the mainstream and would be too damaging to the reputation of EA for me to want it to get too much support from the EA movement at the current time.
I would suggest 1) the happy animal farm idea shouldn’t be explored too much, at least until EA has established itself 2) if these ideas are explored, they should be framed more as philosophical discussion than as a solid policy proposal 3) if in the future someone decides to pursue this, it shouldn’t be directly supported by mainstream EA organisations. Even AI safety research is enough to make people skeptical of EA.
I can see a lot of value in having EA concepts promoted separately from the discussion of Effective Altruism. Not everyone is going to become an EA; in fact, a surprising number of people seem to be turned off by the movement, and so EA material is unlikely to reach them effectively. Having non-EA materials promoting the same ideas means that 1) they may still develop the attributes that EA wants to instil 2) some people may become more inclined towards EA after they have accepted some of its values.
I would argue that increasing the size of the movement is more important than trying to enforce individual morality. If Effective Altruists are truly effective, then each additional EA is worth more than having a large number of existing EAs stop eating meat at EA events.