I have downvoted the LLM answers. I don’t like your approach of simply posting long conversations with LLMs on a forum, for two main reasons. Firstly, your prompts elicit very broad answers that don’t go deep into specific points and often don’t engage with the specific arguments people have put forward. Secondly, your prompts are worded in a leading, biased way.
Here is an LLM opining on this very question (I know this is hypocritical but I thought it would be an amusing and potentially effective way to illustrate the point). Note the conclusion saying “leverage the LLM as a tool, not as a crutch”.
Me: In an online debate someone is posting their conversations with a large language model to articulate key points. Is this a helpful approach?
ChatGPT: Using a large language model (LLM) to articulate key points in an online debate can be helpful in several ways, but it also has limitations. Here’s a breakdown of both the advantages and potential drawbacks:
Helpful Aspects:
Clarification and Articulation: An LLM can help users articulate complex ideas more clearly. If someone is struggling to express a nuanced point, an LLM can assist in providing structured arguments or phrasing that makes their case more understandable.
Information Gathering: LLMs can provide quick access to relevant information, sources, and arguments, which can be useful in a debate. The user can get an overview of a topic or even a counterargument to refine their position.
Balanced Perspectives: LLMs are often trained to offer balanced or well-rounded views on contentious topics. This can help individuals see both sides of an argument, leading to a more informed debate.
Speed and Efficiency: Instead of spending time crafting long responses, individuals can use an LLM to generate content quickly. This can help sustain momentum in fast-paced online debates.
Drawbacks:
Over-reliance on AI: There is a risk that someone using LLM responses might fail to engage deeply with the topic themselves, which can limit genuine intellectual exchange. If they’re simply copying and pasting, the discussion can lose the personal insight or emotional nuance that human debates often require.
Lack of Accountability: Since LLMs generate text based on patterns in data, they might present information that is not entirely accurate, up-to-date, or contextually appropriate. If someone relies heavily on these responses, they may inadvertently spread misinformation or make arguments that aren’t fully grounded in fact.
Detachment from Personal Viewpoint: Debates are often about more than just the articulation of key points—they’re about personal beliefs, experiences, and the nuance of human emotion. Using an LLM could distance a participant from their personal views, making the discussion feel less genuine or relatable.
Ethical Concerns: Not everyone may agree with using an AI-generated argument in a debate, particularly if the other side feels that they are debating a machine rather than a person. This could raise issues about transparency and fairness in discourse.
Overall, using LLMs can be helpful if integrated thoughtfully into personal arguments, but relying on them exclusively may weaken the authenticity and intellectual engagement of a debate. The key is balance: leveraging the LLM as a tool, not as a crutch.
I have upvoted your use of an LLM because this comment is more thoughtful, balanced, and relevant than your average comment, and much more so than the average commenter’s comment in this particular thread. I normally don’t post LLM outputs directly, but this comment thread is so full of unconsidered and unelaborated-upon opinions that I figured this would be a rare place where LLM mediocrity would be a convenient way to raise the average quality of the content. My hope was to stimulate thought and debate; to initiate a conversation, not to provide a conclusion to one.
In another comment thread I asked a specific question to understand your underlying moral theory better, which enabled you to helpfully elaborate on it. I was then able to conclude that I did not align with your moral theory, due to the conclusions it led to, and so could discount the conclusions you draw from that theory. My question also led to a very good, probing question from MichaelStJules which you didn’t answer. I found this back and forth very helpful, as the specific questions uncovered underlying reasons behind our disagreement.
Personally, I hope that going forward you respect the LLM’s advice and refrain from posting LLM outputs directly, instead opting to use LLM responses to develop your own considered response. I think that makes for a better discussion. Indeed, this comment is an example of this, as I made use of the LLM response I recently posted.
I am delighted by Michael’s comments and intend to reply to them all once I’ve had the chance to carefully examine and consider his linked materials.
Overall, I feel quite disappointed in this comment thread for being in what I would call an “activist” mindset, where the correctness of one’s view is taken for granted, and the focus is on practical details of bringing about change in the world in accordance with this view.
I think the question of prioritization of human welfare versus animal welfare should be approached from a “philosopher” mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes relative to each other.
Michael StJules is one of the few animal welfare advocates I’ve encountered who is willing to engage on this philosophical level.
Here’s some quotes from elsewhere in this comment section that I think exemplify what I mean by activist mindset rather than philosopher mindset:
(Single line separators indicate the comments were in a thread responding to each other)
emre kaplan
Disclaimer: I’m funded by EA for animal welfare work.
Some thoughts:
a. So much of the debate feels like a debate on identities and values. I’d really love to see people nitpicking into technical details of cost-effectiveness estimates instead.
… (Truncated)
Ariel Simnegar
“So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF.”
I’ve run into a similar dilemma before, where I’m trying to convince non-EAs to direct some of their charity to AMF rather than their favorite local charity. I believe animal welfare charities are orders of magnitude more cost-effective than AMF, so it’s probably higher EV to try to convince them to direct that charity to e.g. THL rather than AMF. But that request is much less likely to succeed, and could also alienate them (because animal welfare is “weird”) from making more effective donations in the future. Curious about your thoughts about the best way to approach that.
CB
Another option, if they’re sensitive to environmental issues, is to redirect them to charities that are also impactful for sustainability, such as The Good Food Institute. According to the best guess by Giving Green, they can avoid 17 tons of CO2eq for $50.
This way, they can make a positive contribution to the environment (not to mention the positive impact on human health and pandemic risk).
I’ve done it for a charity that does similar stuff in my country, and at the very least people didn’t give any pushback and seemed understanding. You can mention concrete stuff about the progress of alternative proteins, like the fact that they’re the default choice at Burger King.
Jason
I have a sense that there could be a mutually beneficial trade between cause areas lurking in this kind of situation, but it would be tricky to pull off as a practical matter.
One could envision animal-welfare EAs nudging non-EA donors toward GiveWell-style charities when they feel that is the highest-EV option with a reasonable probability of success, and EA global-health donors paying them a “commission” of sorts by counterfactually switching some smaller sum of their own donations from GH to AW.
In addition to challenges with implementation, there would be a potential concern that not as much net money is going to GH as the non-EA donor thinks. On the other hand, funging seems to be almost an inevitable part of the charitable landscape whether it is being done deliberately or not.
Ben Millwood
Yeah, this seems a little… sneaky, for want of a better word. It might be useful to imagine how you think the non-EA donors would feel if the “commission” were proactively disclosed. (Not necessarily terribly! After all, fundraising is often a paid job. Just seems like a useful intuition prompt.)
Stijn
“So it is more important to convince someone to give to e.g. the EA animal welfare fund if they were previously giving to AMF than to convince a non-donor to give that same amount of money to AMF.” More generally, I think it is more important to convince an EA human health and development supporter to diversify and donate say 50% of the donation budget to the most effective animal welfare causes, than to convince a non-EA human charity supporter to diversify and donate say 50% of the donation budget to AMF or similar high-impact human-focused charities.
As an aside, I don’t think someone writing an “activist” comment disqualifies them from being truthseeking.
I used to find it absurd to think one could justify spending on animals when one could be spending on humans. Over the years, I changed my mind through discussing consciousness and moral weights with others, reading many relevant writings, and watching relevant documentaries. I wrote a post explaining why I changed my mind, and engaged extensively with hundreds of comments.
So far, nobody has posed an argument for prioritizing global health over animal welfare which I’ve found convincing. If the case for animal welfare is indeed correct, then marginal global health funding could be doing orders of magnitude more good if instead allocated to animal welfare. I don’t think it means I have bad epistemics, or that my writings aren’t worth engaging with, if my actions are following the logical conclusions of my changed beliefs.
If global health is indeed better at the margin than animal welfare, then I would love to know, because that would mean I’ve been causing enormous harm by allocating my time and donations to preventing us from reducing more suffering. I strive to remain as open-minded as I can to that possibility, but for reasons I and others have written extensively about, I currently think it’s very likely indeed that animal welfare is better at the margin.
I agree that “activist” comments don’t imply that someone isn’t truthseeking. I think that whether an activist mindset or a philosophical mindset should be brought to bear on a given problem is highly context dependent.
I was trying to make the point that I was disappointed that the responses to this question of cause prioritization (human welfare vs animal welfare) seemed to be oriented predominantly toward an activist mindset. To me, this question is one whose context requires, at the very least, a balance of philosophy and activism, if not predominantly philosophy. This interpretation is, I think, supported by this question being asked in the context of a “debate week”, where the implied goal is for us to explain our viewpoints and attempt to resolve our differences in worldviews.
An example of a question where I would be disappointed to see predominantly philosophical debate instead of activist planning would be: “Given the assumption that there is a 1:1e6 moral value tradeoff between cows and shrimp, how should we best allocate a budget of 1 million dollars among this set of existing charities: (list of charities)?”
To respond to a question like that with philosophical debate over the premise would seem off-topic to me. The question specifies a premise, and if you want to fight the hypothetical you ought to initiate an entirely separate conversation.
In your specific case, Ariel, I’d like to thank you for your above comment explaining your philosophical journey and giving links to sources you found influential. This is exactly the sort of comment I would like to see in a conversation like this. I will take the time to read what you have linked and think carefully about it, then get back to you on where it has changed my mind and where I might still disagree.
Very good answer. I have also followed the same path, from donating to the AMF to supporting animal welfare work, since it helps more beings.
Are there specific sources or arguments which you recall as being the key influences in changing your mind?
Good question.
I think the first step was learning more about the terrible ways animals are treated. I read “Eating Animals” when I was 18, which informed me of this. I really liked the author’s approach of “food and tradition are important for me, but this is an important enough topic that I should dig into it”.
This didn’t trigger many donations or any “activism” but it made me go vegetarian. At some point I was eating shrimp in a meal and for some reason I visualised the shrimp in my mind, going about its life in the sea. And I was like “I don’t want to kill them. If they were in front of me I wouldn’t kill them”.
Fast forward a few years, I was donating to both the AMF and some animal charities—basically doing the default stuff regarding EA donating. But I spent a lot of time comparing different cause areas, and I could see that the number to save a human life was super high ($5,000 per life), while the number to save an animal and spare them a life of torture was dirt cheap (less than a dollar). So naturally, since my goal is to help the largest number of beings, I redirected my efforts and money toward animals. I also changed the topics I worked on (my main focus had mostly been environmental work).
I started with supporting standard cage-free commitments, but complemented that by gradually helping more neglected and numerous animals (e.g. donating to the Shrimp Welfare Project), because I didn’t find a good enough reason to say that smaller animals matter less, beyond our basic “this feels weird” bias. Sure, there’s a possibility they’re not sentient, but I simply don’t see why evolution wouldn’t have implemented a mechanism as useful as pain in other beings. We have millions of years of common evolutionary history, and behavioural evidence clearly indicates pain and panic when animals are attacked.
I still updated downward toward Rethink Priorities’ moral weights, because they did much more research than me on that.
The basic argument is pretty simple: animals are much more numerous, they suffer much worse conditions, fewer people are helping them, and we can do tractable stuff there.
Regarding resources, I don’t have a go-to one, but here are some good ones:
- On emotionally connecting with the topic: https://forum.effectivealtruism.org/posts/xtcgsLA2G8bn8vj99/reminding-myself-just-how-awful-pain-can-get-plus-an
- On sentience: https://sites.google.com/nyu.edu/nydeclaration/declaration
Maybe also Animal Liberation Now? I heard it’s pretty good.
“I think the question of prioritization of human welfare versus animal welfare should be approached from a ‘philosopher’ mindset. We must determine the meaning and moral weight of suffering in humans and non-humans before we can know how to weigh the causes relative to each other.”
There are plenty of in-depth discussions on the topic of moral weights. But it seems your preferred moral theory is contractualism, which, as I understand it, renders the question of moral weights somewhat moot.
There was this post on contractualism arguing it leads to global health beating animal welfare. The problem for you is that many are attracted to EA precisely because of impartiality and so have already decided they don’t like contractualism and its conclusions. Check out this comment which points out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. A conclusion like this just seems antithetical to EA.
If you want to argue what we should do under a contractualist moral theory you can do it here, you just might not get as much engagement as on other philosophy-related forums as a lot of people here have already decided they are consequentialist (often after deep reflection).
I’m personally happy to discuss underlying moral theories. This is why I’m looking forward to your answer to MichaelStJules’ question which points out your contractualist theory may lead to special moral concern for, as he puts it, “fetuses, embryos, zygotes and even uncombined sperm cells and eggs”. This would then have a whole host of strongly pro-life and pro-natalist implications.
“Check out this comment which points out that contractualism can favor spending a billion dollars saving one life for certain over spending the same amount of money to almost certainly save far more lives. A conclusion like this just seems antithetical to EA.”
FWIW, this is a consequence of non-aggregation. You can have a fully aggregative or even additive contractualist view, and it would not have this implication. It could be basically utilitarian with respect to moral agents (and excluding conscious beings who aren’t also moral agents). But contractualism is usually not aggregative, AFAIK.