Feel free to message me on here.
Depends if there’s a better option. I agree with MichaelStJules when he says “’Meat eating problem’ seems likely to be understood too generally as the problem of animal consumption.” The other proposed options don’t seem that great to me because they seem to abstract too far away from the issue of saving lives which is at the core of the problem.
It’s worth noting there is a cost to changing the name of something. You’ll then have the exact same thing referred to by different names in different places which can lead to confusion. Also it’s very hard to get a whole community to change the way they refer to something that has been around for a while.
With regard to the “persuasion” point—I think the issue is that the problem we are talking about is inherently uncomfortable. We’re talking about how saving human lives may not be as good as we think it is because humans cause suffering to animals. This is naturally going to be hard for a lot of people to swallow the second you explain it to them, and I don’t think putting a nicer name on it is going to change that.
With regard to fairness…this is my personal view, but it doesn’t bother me much. I don’t see evidence of individuals in lower income countries caring about the language we use on the EA Forum, which is what would ultimately influence me on this point.
I’m aware I’m in the extreme minority here and I might be wrong. I fully expect to get further downvotes but if people disagree I would welcome pushback in the form of replies.
Accuracy: I don’t think the core problem is actually the people whose lives we are saving; it’s that they then eat meat and cause suffering. I think it’s important to separate the people from the core problem, as this better helps us consider possible solutions.
The main takeaway of the ‘meat eater problem’ (sorry!) is to reassess the cost-effectiveness of saving human lives, not necessarily to argue that we should focus on reducing animal consumption in lower-income countries. While reducing animal consumption is important, that’s not typically the central takeaway from this specific ‘problem’.
In this sense, the saving lives aspect is more central to the problem than the meat consumption aspect, though both are pivotal. So, in a purely logical sense, the term ‘meat eater problem’ might actually be more accurate.
Sorry that is fair, I think I assumed too much about your views.
What is your preferred moral theory out of interest?
When you say top 1 percent of animal-friendly-moral-theory humans but maybe in the bottom third of EAs, do you just mean something like hedonism but with moral weights that are far less animal-friendly than, say, RP’s?
Again I think it depends on what we mean by an animal-friendly moral theory or a pro-global health moral theory. I’d be surprised though if many people hold a pro-global health moral theory but still favor animal welfare over global health. But maybe I’m wrong.
I’m not sure what the scope of “similarly-animal-friendly theories” is in your mind. For me I suppose it’s most if not all consequentialist / aggregative theories that aren’t just blatantly speciesist. The key point is that the number of animals suffering (and that we can help) completely dwarfs the number of humans. Also, as MichaelStJules says, I’m pretty sure animals have desires and preferences that are being significantly obstructed by the conditions humans impose on them.
I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories. You seem to think that it’s because they are making this simple mistake with the multiplier argument, with your evidence being that loads of people are citing the RP moral weights project. I suppose I’m not sure which of us is correct, but I would point out that people may just find the moral weights project important because they have some significant credence in hedonism.
You preface this post as being an argument for Global Health, but it isn’t necessarily. As you say in the conclusion, it is a call not to “focus on upside cases and discard downside ones as the Multiplier Argument pushes you to do”. For you this works in favor of global health; for others it may not. Anyone who thinks along the lines of “how can anyone even consider funding animal welfare over global health, animals cannot be fungible with humans!”, or similar, will have this argument pull them closer to the animal welfare camp.
In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1 respectively.
I take on board this is just a toy example, but I wonder how relevant it is. For starters I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing animal welfare (most prominent of which is hedonism). I think this is primarily what drove animal welfare clearly beating global health in the voting. So the “50/50” in the toy example might be a bit misleading, but I would be interested in polling the EA community to understand their moral views.
You can counter this and say that people still aren’t factoring in that global health destroys animal welfare on pretty much any other moral view, but is this true? This needs more justification as per MichaelStJules’ comment.
Even if it is true, is it fair to say that non-hedonic moral theories favor global health over animal welfare to a greater extent than hedonism favors animal welfare over global health? That claim is essentially doing all the work in your toy example, but seems highly uncertain/questionable.
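To make the arithmetic in the toy example explicit, here is a minimal sketch of the expected-value calculation. The 1 / 500 / 1000 / 1 figures are the quoted toy numbers; the 0.7 credence scenario is my own illustrative addition, not something from the original post.

```python
# Expected-value sketch of the toy Multiplier Argument example above.
# The 1 / 500 / 1000 / 1 figures are the quoted toy numbers; the 0.7 credence
# is my own illustration of how the ranking shifts.

def expected_value(p_hedonism, value_if_hedonism, value_if_not):
    """Expected value of an intervention given a credence in hedonism."""
    return p_hedonism * value_if_hedonism + (1 - p_hedonism) * value_if_not

for p in (0.5, 0.7):
    ev_gh = expected_value(p, 1, 1000)   # global health
    ev_aw = expected_value(p, 500, 1)    # animal welfare
    print(f"P(hedonism)={p}: global health={ev_gh:.1f}, animal welfare={ev_aw:.1f}")

# At 50/50 global health wins (500.5 vs 250.5) because the non-hedonic worlds
# favor it 1000:1 while the hedonic worlds favor animal welfare only 500:1.
# The ranking flips once the credence in hedonism exceeds roughly 2/3, so both
# the credences and that asymmetry are doing the work.
```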
I think you may be departing from strong longtermism. The first proposition for ASL is “Every option that is near-best overall is near-best for the far future.” We are talking about making decisions whose outcome is one of the best things we can do for the far future. It’s not merely something that is better than something deemed terrible.
I think you have misunderstood this. An option can be the best thing you can do because it averts a terrible outcome, as opposed to achieving the best possible outcome. For example, if we are at risk of entering a hellscape that will last for eternity and you can press a button to simply stop that from happening, that seems to me like it would be the single best thing anyone can do (overall or for the far future). The end result however would just be a continuation of the status quo. This is the concept of counterfactual impact—we compare the world after our intervention to the world that would have happened in the absence of the intervention and the difference in value is essentially how good the intervention was. Indeed a lot of longtermists simply want to avert s-risks (risks of astronomical suffering).
I don’t understand some of what you’re saying, including on ambiguity. I don’t find it problematic to say that the US winning the race to superintelligence is better in expectation than China winning. China has authoritarian values, so if they control the world using superintelligence they are more likely to control it according to authoritarian values, which means less freedom, and freedom is important for wellbeing, etc. I think we can say, if we assume persistence, that future people would more likely be thankful the US won the race to superintelligence than that China did. I am extrapolating that future people will also like freedom. Could I be wrong? Sure, but we are doing things based on expectation.
I would say that your doubts about persistence are the best counter to longtermism. The claim that superintelligence may allow a state to control the world for a very long time is perhaps a more controversial one, but not one I am willing to discount. If you want to engage with object-level arguments on this point check out this document: Artificial General Intelligence and Lock-In.
Cells, atoms and neurons aren’t conscious entities in themselves. I see no principled reason for going to that level for an uninformed prior.
A true uninformed prior would probably say “I have no idea”, but if we’re going to have some idea, it seems more natural to start with all sentient individuals having equal weight. The individual is the level at which conscious experience happens, not the cell/atom/neuron.
I don’t think I have explained this well enough. I’d be happy to have a call sometime if you want as that might be more efficient than this back and forth. But I’ll reply for now.
b) These near future states that will endure for a long time will be the best states for the beings in the far future.
No. This is not what I’m saying.
The key thing is that there are two attractor states that differ in value and you can affect if you end up in one or the other. The better one does not have to be the best possible state of the world, it just has to be better than the other attractor state.
So if you achieve the better one you persist at that higher expected value for a very long time compared to the counterfactual of persisting at the lower value for a very long time. So even if the difference in value (at any given time) is kind of small, the fact that this difference persists for a very long time is what gives you the very large counterfactual impact.
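As a rough numerical sketch of that persistence point (all numbers here are my own and purely illustrative):

```python
# Toy illustration: a small but persistent difference in value between two
# attractor states can dwarf a large difference that quickly washes out.
# All numbers are made up for illustration.

def counterfactual_impact(value_better, value_worse, years):
    """Total value difference from landing in the better attractor state."""
    return (value_better - value_worse) * years

# 1% better per year, sustained for a million years:
print(counterfactual_impact(1.00, 0.99, 1_000_000))  # ~10,000
# 50% better per year, but washing out after a decade:
print(counterfactual_impact(1.00, 0.50, 10))          # 5.0
```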
That means that the state that is hypothesised to last for an extremely long time, is a state that is close to the present state.
Not necessarily. To use the superintelligence example, the world will look radically different under either the US or China having superintelligence than it does now.
For example, how exactly does the US winning the race to superintelligence lead to one of the best possible futures for quadrillions of people in the far future? How long is this state expected to last?
As I said earlier it doesn’t necessarily lead to one of the best futures, but to cover the persistence point—this is a potentially fair pushback. Some people doubt the persistence of longtermist interventions/attractor states, which would then dampen the value of longtermist interventions. We can still debate the persistence of different states of the world though, and many think that a government controlling superintelligence would become very powerful and so be able to persist for a long time (exactly how long I don’t know, but “long time” is all we really need for it to become an important question).
What exactly is claimed to be persisting for a very long time? The US having dominance over the world? Liberty? Democracy? Wellbeing? And whatever it is, how is that influencing the quadrillions of lives in the far future, given that there is still a large subset of X which is changing.
Yeah I guess in this case I’m talking about the US having dominance over the world as opposed to China having dominance over the world. Remember I’m just saying one attractor state is better than the other in expectation, not that one of them is so great. I think it’s fair to say I’d rather the US control the world than China control the world given the different values the two countries hold. Leopold Aschenbrenner talks more about this here. Of course I can’t predict the future precisely, but we can talk about expectations.
You haven’t factored in the impact of saving a life on fertility. Check out this literature review which concludes the following (bold emphasis mine):
I think the best interpretation of the available evidence is that the impact of life-saving interventions on fertility and population growth varies by context, above all with total fertility, and is rarely greater than 1:1 [meaning that averting a death rarely causes a net drop in population]. In places where lifetime births/woman has been converging to 2 or lower, family size is largely a conscious choice, made with an ideal family size in mind, and achieved in part by access to modern contraception. In those contexts, saving one child’s life should lead parents to avert a birth they would otherwise have. The impact of mortality drops on fertility will be nearly 1:1, so population growth will hardly change.
Also, you’re assuming neuron counts should be used as a proxy for moral weight, but I’m highly skeptical that is fair (see this).
I think the question of prioritization of human welfare versus animal welfare should be approached from a “philosopher” mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes relative to each other.
There are plenty of in-depth discussions on the topic of moral weights. But it seems your preferred moral theory is contractualism, which as I understand it renders the question of moral weights somewhat moot.
There was this post on contractualism arguing it leads to global health beating animal welfare. The problem for you is that many are attracted to EA precisely because of impartiality and so have already decided they don’t like contractualism and its conclusions. Check out this comment which points out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. A conclusion like this just seems antithetical to EA.
If you want to argue what we should do under a contractualist moral theory you can do it here, you just might not get as much engagement as on other philosophy-related forums as a lot of people here have already decided they are consequentialist (often after deep reflection).
I’m personally happy to discuss underlying moral theories. This is why I’m looking forward to your answer to MichaelStJules’ question which points out your contractualist theory may lead to special moral concern for, as he puts it, “fetuses, embryos, zygotes and even uncombined sperm cells and eggs”. This would then have a whole host of strongly pro-life and pro-natalist implications.
In another comment thread I asked a specific question to understand your underlying moral theory better, which enabled you to helpfully elaborate on it. I was then able to conclude I did not align with your moral theory due to the conclusions it led to, and so could discount the conclusions you draw from that theory. My question also led to a very good, probing question from MichaelStJules which you didn’t answer. I found this back and forth very helpful, as the specific questions uncovered the underlying reasons behind our disagreement.
Personally, I hope going forward you respect the LLM’s advice and refrain from posting LLM outputs directly, instead opting to use LLM responses to develop your own considered response. I think that makes for a better discussion. Indeed this comment is an example of this as I made use of the LLM response I recently posted.
I think animals very likely don’t have that kind of experience
Why?
Can you expand on why you don’t think most animals are moral patients?
I have downvoted the LLM answers. I don’t like your approach of simply posting long conversations with LLMs on a forum for various reasons. Firstly, your prompts are such that the LLM provides very broad answers that don’t go very deep into specific points and often don’t engage with the specific arguments people have put forward. Secondly, your prompts are worded in a leading, biased way.
Here is an LLM opining on this very question (I know this is hypocritical but I thought it would be an amusing and potentially effective way to illustrate the point). Note the conclusion saying “leverage the LLM as a tool, not as a crutch”.
Me: In an online debate someone is posting their conversations with a large language model to articulate key points. Is this a helpful approach?
ChatGPT: Using a large language model (LLM) to articulate key points in an online debate can be helpful in several ways, but it also has limitations. Here’s a breakdown of both the advantages and potential drawbacks:
Helpful Aspects:
Clarification and Articulation: An LLM can help users articulate complex ideas more clearly. If someone is struggling to express a nuanced point, an LLM can assist in providing structured arguments or phrasing that makes their case more understandable.
Information Gathering: LLMs can provide quick access to relevant information, sources, and arguments, which can be useful in a debate. The user can get an overview of a topic or even a counterargument to refine their position.
Balanced Perspectives: LLMs are often trained to offer balanced or well-rounded views on contentious topics. This can help individuals see both sides of an argument, leading to a more informed debate.
Speed and Efficiency: Instead of spending time crafting long responses, individuals can use an LLM to generate content quickly. This can help sustain momentum in fast-paced online debates.
Drawbacks:
Over-reliance on AI: There is a risk that someone using LLM responses might fail to engage deeply with the topic themselves, which can limit genuine intellectual exchange. If they’re simply copying and pasting, the discussion can lose the personal insight or emotional nuance that human debates often require.
Lack of Accountability: Since LLMs generate text based on patterns in data, they might present information that is not entirely accurate, up-to-date, or contextually appropriate. If someone relies heavily on these responses, they may inadvertently spread misinformation or make arguments that aren’t fully grounded in fact.
Detachment from Personal Viewpoint: Debates are often about more than just the articulation of key points—they’re about personal beliefs, experiences, and the nuance of human emotion. Using an LLM could distance a participant from their personal views, making the discussion feel less genuine or relatable.
Ethical Concerns: Not everyone may agree with using an AI-generated argument in a debate, particularly if the other side feels that they are debating a machine rather than a person. This could raise issues about transparency and fairness in discourse.
Overall, using LLMs can be helpful if integrated thoughtfully into personal arguments, but relying on them exclusively may weaken the authenticity and intellectual engagement of a debate. The key is balance: leveraging the LLM as a tool, not as a crutch.
OK thanks for your perspective, although it doesn’t seem convincing to me. I could be more convinced by an argument that inequality / poverty in rich countries results in poor decision-making in those same rich countries.
If extinction and non-extinction are “attractor states”, from what I gather, a state that is expected to last an extremely long time, what exactly isn’t an attractor state?
Any state that isn’t very persistent. For example, an Israel-Gaza ceasefire. We could achieve it, but from history we know it’s unlikely to last very long. The fact that it is unlikely to last makes it less desirable to work towards than if we were confident it would last a long time.
The extinction vs non-extinction example is the classic attractor state example, but not the only one. Another one people talk about is stable totalitarianism. Imagine China or the US can win the race to superintelligence. Which country wins the race essentially controls the world for a very long time given how powerful superintelligence would be. So we have two different attractor states—one where China wins and has long-term control and one where the US wins and has long-term control. Longtermist EAs generally think the state where the US wins is the much better one—the US is a liberal democracy whereas China is an authoritarian state. So if we just manage to ensure the US wins we would experience the better state for a very long time, which seems very high value.
There are ways to counter this. You can argue the states aren’t actually that persistent, e.g. you don’t think superintelligence is that powerful or even realistic in the first place. Or you can argue one isn’t clearly better than the other. Or you can argue that there’s not much we can do to achieve one state over the other. You touch on this last point when you say that longtermist interventions may be subject to washing out themselves, but it’s important to note that longtermist interventions often aim to achieve short-term outcomes that persist into the long-term, as opposed to long-term outcomes (I explain this better here).
Saving a life through bed nets just doesn’t seem to me to put the world in a better attractor state, which makes its impact vulnerable to washing out. Medical research doesn’t either.
I’m skeptical of this link between eradicating poverty and reducing AI risk. Generally richer countries’ governments are not very concerned about extreme poverty. To the extent that they are, it is the remit of certain departments like USAID that have little if any link to AI development. If we have an AI catastrophe it is probably going to be the fault of a leading AI lab like OpenAI and/or the relevant regulators or legislators not doing their job well enough. I just don’t see why these actors would do any better just because there is no extreme poverty halfway across the world—as I say, global poverty is way down their priority list if it is on it at all.
I’m surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just “not counting”—I’ve been very frustrated with both Jeff Kaufman and Eliezer Yudkowsky on this.
Jeff because he doesn’t seem to have provided any justification (from what I’ve seen) for the claim that animals don’t have relevant experiences that make them moral patients. He simply asserts this as his view. It’s not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread which unfortunately I don’t think exists anymore as all links I can find go to some weird page in another language. I just remember two things that really didn’t impress me:
I wish I remembered this better, but he made some sort of assertion that animals don’t have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment, e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to, which seemed a bit suspect. (Apologies if I’m misremembering anything here).
He didn’t respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.