Feel free to message me on here.
Thank you! Links in articles such as this just weren’t working.
This is the relevant David Pearce comment I was referring to, which Yudkowsky just ignored despite continuing to respond to less challenging comments:
Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.
Anyone who cares about sentience-friendly intelligence should not harm our fellow subjects of experience. Shutting down factory farms and slaughterhouses will eliminate one of the world’s worst forms of severe and readily avoidable suffering.
FWIW this seems wrong, not least because, as was correctly pointed out many times, there just isn’t a lot of money in the AW space. I’m pretty sure GHD has far better places to fundraise from.
This is bizarre to me. This post suggests that between $30 million and $40 million goes towards animal welfare each year (and it could be more now, as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I’d imagine some people would make it their overwhelming mission to ensure we didn’t (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn’t seem too bad to me. I’m not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there’s just no point doesn’t stack up for me.
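To make the arithmetic explicit (a rough sketch using the upper end of that $30–40 million range and the $5,000 cost-per-life figure above):

\[
\frac{\$40{,}000{,}000 \text{ per year}}{\$5{,}000 \text{ per life}} = 8{,}000 \text{ lives per year}
\]

The lower end of the range gives 6,000 lives per year, hence “up to 8,000”.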
I’m not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people who think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
‘argument is weak but some people intuitively believe it in part because they want it to be true’ → ‘there is no strong post that can really be written’ → ‘nobody posts it’
Maybe you can ask Jeff Kauffman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).
I’m surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just “not counting”—I’ve been very frustrated with both Jeff Kauffman and Eliezer Yudkowsky on this.
Jeff because he doesn’t seem to have provided any justification (from what I’ve seen) for the claim that animals don’t have relevant experiences that make them moral patients. He simply asserts this as his view. It’s not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don’t think exists anymore as all links I can find go to some weird page in another language. I just remember two things that really didn’t impress me:
I wish I remembered this better, but he made some sort of assertion that animals don’t have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment, e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to, which seemed a bit suspect. (Apologies if I’m misremembering anything here).
He didn’t respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.
It depends on whether there’s a better option. I agree with MichaelStJules when he says “‘Meat eating problem’ seems likely to be understood too generally as the problem of animal consumption.” The other proposed options don’t seem that great to me because they seem to abstract too far away from the issue of saving lives, which is at the core of the problem.
It’s worth noting there is a cost to changing the name of something. You’ll then have the exact same thing referred to by different names in different places, which can lead to confusion. Also, it’s very hard to get a whole community to change the way they refer to something that has been around for a while.
With regard to the “persuasion” point: I think the issue is that the problem we are talking about is inherently uncomfortable. We’re talking about how saving human lives may not be as good as we think it is because humans cause suffering to animals. This is naturally going to be hard for a lot of people to swallow the second you explain it to them, and I don’t think putting a nicer name on it is going to change that.
With regard to fairness… this is my personal view, but it doesn’t bother me much. I don’t see evidence of individuals in lower-income countries caring about the language we use on the EA Forum, which is what would ultimately influence me on this point.
I’m aware I’m in the extreme minority here and I might be wrong. I fully expect to get further downvotes but if people disagree I would welcome pushback in the form of replies.
Accuracy: I don’t think the core problem is actually the people whose lives we are saving; it’s that they then eat meat and cause suffering. I think it’s important to separate the people from the core problem, as this better helps us consider possible solutions.
The main takeaway of the ‘meat eater problem’ (sorry!) is to reassess the cost-effectiveness of saving human lives, not necessarily to argue that we should focus on reducing animal consumption in lower-income countries. While reducing animal consumption is important, that’s not typically the central takeaway from this specific ‘problem’.
In this sense, the saving lives aspect is more central to the problem than the meat consumption aspect, though both are pivotal. So, in a purely logical sense, the term ‘meat eater problem’ might actually be more accurate.
Sorry, that is fair. I think I assumed too much about your views.
Out of interest, what is your preferred moral theory?
When you say top 1 percent of animal-friendly-moral-theory humans but maybe in the bottom third of EAs, is this just, say, hedonism but with moral weights that are far less animal-friendly than, say, RP’s?
Again I think it depends on what we mean by an animal-friendly moral theory or a pro-global health moral theory. I’d be surprised though if many people hold a pro-global health moral theory but still favor animal welfare over global health. But maybe I’m wrong.
I’m not sure what the scope of “similarly-animal-friendly theories” is in your mind. For me I suppose it’s most if not all consequentialist / aggregative theories that aren’t just blatantly speciesist. The key point is that the number of animals suffering (and that we can help) completely dwarfs the number of humans. Also, as MichaelStJules says, I’m pretty sure animals have desires and preferences that are being significantly obstructed by the conditions humans impose on them.
I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories. You seem to think that it’s because they are making this simple mistake with the multiplier argument, with your evidence being that loads of people are citing the RP moral weights project. I suppose I’m not sure which of us is correct, but I would point out that people may just find the moral weights project important because they have some significant credence in hedonism.
You preface this post as being an argument for Global Health, but it isn’t necessarily. As you say in the conclusion, it is a call not to “focus on upside cases and discard downside ones as the Multiplier Argument pushes you to do”. For you this works in favor of global health; for others it may not. Anyone who thinks along the lines of “how can anyone even consider funding animal welfare over global health, animals cannot be fungible with humans!”, or similar, will have this argument pull them closer to the animal welfare camp.
In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1 respectively.
I take on board this is just a toy example, but I wonder how relevant it is. For starters, I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing animal welfare (the most prominent of which is hedonism). I think this is primarily what drove animal welfare clearly beating global health in the voting. So the “50/50” in the toy example might be a bit misleading, but I would be interested in polling the EA community to understand their moral views.
You can counter this and say that people still aren’t factoring in that global health destroys animal welfare on pretty much any other moral view, but is this true? This needs more justification as per MichaelStJules’ comment.
Even if it is true, is it fair to say that non-hedonic moral theories favor global health over animal welfare to a greater extent than hedonism favors animal welfare over global health? That claim is essentially doing all the work in your toy example, but seems highly uncertain/questionable.
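To spell out why that asymmetry carries the argument, here is the expected value calculation implied by the toy numbers above (my own arithmetic, not taken from the post):

\[
\begin{aligned}
EV(\text{Global Health}) &= 0.5 \times 1 + 0.5 \times 1000 = 500.5 \\
EV(\text{Animal Welfare}) &= 0.5 \times 500 + 0.5 \times 1 = 250.5
\end{aligned}
\]

If the non-hedonic payoffs were instead symmetric with the hedonic ones (say, 500 for global health and 1 for animal welfare), both options would come out at 250.5. The result is driven entirely by assuming non-hedonic theories favor global health (1000:1) even more lopsidedly than hedonism favors animal welfare (500:1).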
I think you may be departing from strong longtermism. The first proposition for ASL is “Every option that is near-best overall is near-best for the far future.” We are talking about making decisions whose outcome is one of the best things we can do for the far future. It’s not merely something that is better than something deemed terrible.
I think you have misunderstood this. An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome. For example, if we are at risk of entering a hellscape that will last for eternity and you can press a button to simply stop that from happening, that seems to me like it would be the single best thing anyone can do (overall or for the far future). The end result however would just be a continuation of the status quo. This is the concept of counterfactual impact—we compare the world after our intervention to the world that would have happened in the absence of the intervention and the difference in value is essentially how good the intervention was. Indeed a lot of longtermists simply want to avert s-risks (risks of astronomical suffering).
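In symbols (my own notation, just restating the comparison described above):

\[
\text{Counterfactual impact} = V(\text{world with the intervention}) - V(\text{world without it})
\]

On this measure, pressing the button that averts an eternal hellscape scores enormously, even though the resulting world is just the status quo.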
I don’t understand some of what you’re saying, including on ambiguity. I don’t find it problematic to say that the US winning the race to superintelligence is better in expectation than China winning. China has authoritarian values, so if they control the world using superintelligence they are more likely to control it according to authoritarian values, which means less freedom, and freedom is important for wellbeing, etc. I think we can say, if we assume persistence, that future people would be more likely to be thankful if the US won the race to superintelligence than if China did. I am extrapolating that future people will also value freedom. Could I be wrong? Sure, but we are making decisions based on expectation.
I would say that your doubts about persistence are the best counter to longtermism. The claim that superintelligence may allow a state to control the world for a very long time is perhaps a more controversial one, but not one I am willing to discount. If you want to engage with object-level arguments on this point check out this document: Artificial General Intelligence and Lock-In.
Cells, atoms and neurons aren’t conscious entities in themselves. I see no principled reason for going to that level for an uninformed prior.
A true uninformed prior would probably say “I have no idea”, but if we’re going to have some idea, it seems more natural to start with all sentient individuals having equal weight. The individual is the level at which conscious experience happens, not the cell/atom/neuron.
I don’t think I have explained this well enough. I’d be happy to have a call sometime if you want as that might be more efficient than this back and forth. But I’ll reply for now.
b) These near future states that will endure for a long time will be the best states for the beings in the far future.
No. This is not what I’m saying.
The key thing is that there are two attractor states that differ in value, and you can affect whether you end up in one or the other. The better one does not have to be the best possible state of the world; it just has to be better than the other attractor state.
So if you achieve the better one, you persist at that higher expected value for a very long time, compared to the counterfactual of persisting at the lower value for a very long time. So even if the difference in value (at any given time) is fairly small, the fact that this difference persists for a very long time is what gives you the very large counterfactual impact.
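Roughly, using my own notation just to make the structure explicit:

\[
\text{Counterfactual impact} \approx (v_{\text{better}} - v_{\text{worse}}) \times T
\]

where \(v_{\text{better}}\) and \(v_{\text{worse}}\) are the values per unit of time of the two attractor states and \(T\) is how long the difference persists. Even a small per-period difference becomes very large when \(T\) is very long.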
That means that the state that is hypothesised to last for an extremely long time, is a state that is close to the present state.
Not necessarily. To use the superintelligence example, the world will look radically different than it does now whether the US or China ends up with superintelligence.
For example, how exactly does the US winning the race to superintelligence lead to one of the best possible futures for quadrillions of people in the far future? How long is this state expected to last?
As I said earlier, it doesn’t necessarily lead to one of the best futures, but to cover the persistence point: this is a potentially fair pushback. Some people doubt the persistence of longtermist interventions/attractor states, which would then dampen the value of longtermist interventions. We can still debate the persistence of different states of the world though, and many think that a government controlling superintelligence would become very powerful and so be able to persist for a long time (exactly how long I don’t know, but “long time” is all we really need for it to become an important question).
What exactly is claimed to be persisting for a very long time? The US having dominance over the world? Liberty? Democracy? Wellbeing? And whatever it is, how is that influencing the quadrillions of lives in the far future, given that there is still a large subset of X which is changing.
Yeah I guess in this case I’m talking about the US having dominance over the world as opposed to China having dominance over the world. Remember I’m just saying one attractor state is better than the other in expectation, not that one of them is so great. I think it’s fair to say I’d rather the US control the world than China control the world given the different values the two countries hold. Leopold Aschenbrenner talks more about this here. Of course I can’t predict the future precisely, but we can talk about expectations.
You haven’t factored in the impact of saving a life on fertility. Check out this literature review which concludes the following (bold emphasis mine):
I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1 [meaning that averting a death rarely causes a net drop in population]. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an ideal family size in mind, and achieved in part by access to modern contraception. In those contexts, saving one child’s life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change.
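To make the ratio concrete (my own notation, not from the review): if averting one death leads parents to avert \(r\) births on average, the net change in population per life saved is roughly

\[
\Delta \text{population} \approx 1 - r
\]

So when \(r\) is close to 1, as the review suggests in low-fertility contexts, population growth hardly changes, and only when \(r\) is well below 1 does averting a death meaningfully increase the future population.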
Also, you’re assuming neuron counts should be used as a proxy for moral weight, but I’m highly skeptical that is fair (see this).
I think the question of prioritization of human welfare versus animal welfare should be approached from a “philosopher” mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes relative to each other.
There are plenty of in-depth discussions on the topic of moral weights. But it seems your preferred moral theory is contractualism, which as I understand it renders the question of moral weights somewhat redundant.
There was this post on contractualism arguing it leads to global health beating animal welfare. The problem for you is that many are attracted to EA precisely because of impartiality and so have already decided they don’t like contractualism and its conclusions. Check out this comment which points out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. A conclusion like this just seems antithetical to EA.
If you want to argue about what we should do under a contractualist moral theory, you can do it here; you just might not get as much engagement as you would on other philosophy-related forums, since a lot of people here have already decided they are consequentialist (often after deep reflection).
I’m personally happy to discuss underlying moral theories. This is why I’m looking forward to your answer to MichaelStJules’ question which points out your contractualist theory may lead to special moral concern for, as he puts it, “fetuses, embryos, zygotes and even uncombined sperm cells and eggs”. This would then have a whole host of strongly pro-life and pro-natalist implications.
In another comment thread I asked a specific question to understand your underlying moral theory better, which enabled you to helpfully elaborate on it. I was then able to conclude I did not align with your moral theory due to the conclusions it led to, and so could discount the conclusions you draw from that theory. My question also led to a very good, probing question from MichaelStJules which you didn’t answer. I found this back and forth very helpful, as the specific questions uncovered underlying reasons behind our disagreement.
Personally, I hope going forward you respect the LLM’s advice and refrain from posting LLM outputs directly, instead opting to use LLM responses to develop your own considered response. I think that makes for a better discussion. Indeed this comment is an example of this as I made use of the LLM response I recently posted.
I think animals very likely don’t have that kind of experience
Why?
Can you expand on why you don’t think most animals are moral patients?
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
So, the argument you’re making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like it’s a common approach in moral philosophy. But, I recognize that these comparisons are emotionally charged, and it’s important to use them carefully to avoid alienating others.