Eliezer's perspective on animal consciousness is especially frustrating because of the real harm it's caused to rationalists' openness to caring about animal welfare.
Rationalists are much more likely than highly engaged EAs to either dismiss animal welfare outright, or simply not think about it because AI x-risk is "obviously" more important. (For a case study, compare how this author's post on fish farming was received on the EA Forum versus LessWrong.) Eliezer-style arguments about the "implausibility" of animal suffering abound. Discussions of the implications of AI outcomes for farmed or wild animals (i.e. almost all currently existing sentient beings) are few and far between.
Unlike Eliezer's overconfidence in physicalism and FDT, his overconfidence in animals not mattering has serious real-world effects. Eliezer's views have huge influence on rationalist culture, which has significant influence on those who could steer future TAI. If the alignment problem is solved, it'll be really important for those who steer future TAI to care about animals, and to be motivated to use TAI to improve animal welfare.
I would very much prefer it if one didn't appeal to the consequences of the belief about animal moral patienthood, and instead argued about whether animals in fact are moral patients or not, or whether the question is well-posed.
For this reason, I have strong-downvoted your comment.
Thanks for describing your reasons. My criterion for moral patienthood is described by this Brian Tomasik quote:
When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.
Many other criteria for moral patienthood which exclude animals have been proposed. These criteria always suffer from some combination of the following:
Arbitrariness. For example, "human DNA is the criterion for moral patienthood" is just as arbitrary as "European DNA is the criterion for moral patienthood".
Exclusion of some humans. For example, "high intelligence is the criterion for moral patienthood" excludes people who have severe mental disabilities.
Exclusion of hypothetical beings. For example, "human DNA is the criterion for moral patienthood" would exclude superintelligent aliens and intelligent conscious AI. Also, if some people you know were unknowingly members of a species which looked/acted much like humans but had very different DNA, they would suddenly become morally valueless.
Collapsing to sociopathy or nihilism. For example, "animals don't have moral patienthood because we have power over them" is just nihilism, and if a person used that justification to treat other humans the way we treat farmed animals, they'd be locked up.
The most parsimonious definition of moral patient I've seen proposed is just "a sentient being". I don't see any reason why I should add complexity to that definition in order to exclude nonhuman animals. The only motivation I can think of for doing this would be to compromise on my moral principles for the sake of the pleasure associated with eating meat, which is untenable to a mind wired the way mine is.
I think the objection comes from the seeming asymmetry between over-attributing and under-attributing consciousness. It's fine to discuss our independent impressions about some topic, but when one's view is a minority position and the consequences of false beliefs are high, isn't there some obligation of epistemic humility?
Disagreed: animal moral patienthood competes with all the other possible interventions effective altruists could be doing, and does so symmetrically (the opportunity cost cuts in both directions!).
It's frustrating to read comments like this because they make me feel like, if I happen to agree with Eliezer about something, my own agency and ability to think critically are being questioned before I've even joined the object-level discussion.
Separately, this comment makes a bunch of mostly-implicit object-level assertions about animal welfare and its importance, and a bunch of mostly-explicit assertions about Eliezer's opinions and influence on rationalists and EAs, as well as the effect of this influence on the impacts of TAI.
None of these claims are directly supported in the comment, which is fine if you don't want to argue for them here, but the way the comment is written might lead readers who agree with the implicit claims about animal welfare to accept the explicit claims about Eliezer's influence and opinions and their effects on TAI with a less critical eye than if these claims were more clearly separated.
For example, I don't think it's true that a few FB posts/comments have had a "huge influence" on rationalist culture. I also think that worrying about animal welfare specifically when thinking about TAI outcomes is less important than you claim. If we succeed in being able to steer TAI at all (unlikely, in my view), animals will do fine; so will everyone else. At a minimum, there will also be no more global poverty, no more malaria, and no more animal suffering. Even if the specific humans who develop TAI don't care at all about animals themselves (not exactly likely), they are unlikely to completely ignore the concerns of everyone else who does care. But none of these disagreements have much if any bearing on whether I think animal suffering is real (I find this at least plausible) or whether that's a moral horror (I think this is very likely, if the suffering is real).
If we succeed in being able to steer TAI at all (unlikely, in my view), animals will do fine; so will everyone else
I'm not personally convinced, FWIW; this line of reasoning has some plausibility, but it feels extremely out of line with approximately every reasonable reference class TAI could be in.
I apologize for phrasing my comment in a way that made you feel like that. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically"; I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the sequences, and have been influenced on many subjects by Eliezer's writings.
I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:
Caring about animal welfare is important (99% confidence): Here's the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer (see the expected-value sketch after this list of claims).
Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it's corroborated here by many disinterested parties.
Eliezer's views on animal welfare have had significant influence on views of animal welfare in rationalist culture (75% confidence):
A fair critique is that sure, the sequences and HPMOR have had huge influence on rationalist culture, but the claim that Eliezer's views in domains that have nothing to do with rationality (like animal welfare) have had outsize influence on rationalist culture is much less clear.
My only pushback is the experience I've had engaging with rationalists and reading LessWrong, where I've just seen rationalists reflecting Eliezer's views on many domains other than "Rationality: A-Z" over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn't the only influential EA/rationalist who believes this, and he didn't originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
NYT: "two of the world's prominent A.I. labs — organizations that are tackling some of the tech industry's most ambitious and potentially powerful projects — grew out of the Rationalist movement... Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community."
Sam Altman: "certainly [Eliezer] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc".
On whether aligned TAI would create a utopia for humans and animals, I think the arguments for pessimism, especially about the prospects for animals, are serious enough that having those who steer TAI care about animals is very important.
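To make the moral-uncertainty step in the first claim concrete, here is a minimal expected-value sketch; the symbols and numbers are purely illustrative assumptions, not estimates I'm defending. Let $p$ be the probability that animals can suffer, $B$ the moral value of protecting them if they can, and $c$ the cost of caring if they can't. The policy "care" then beats "don't care" whenever

$$\mathbb{E}[\text{care}] - \mathbb{E}[\text{don't care}] = pB - (1 - p)\,c > 0 \quad\Longleftrightarrow\quad p > \frac{c}{B + c}.$$

If, say, $B = 100$ and $c = 1$, caring wins for any $p > 1/101 \approx 0.01$. Because the policy must be chosen before we learn the truth, it remains justified ex ante even in the worlds where animals turn out not to suffer.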
Thank you. I don't have any strong objections to these claims, and I do think pessimism is justified. Though my guess is that a lot of people at places like OpenAI and DeepMind do care about animal welfare pretty strongly already. Separately, I think that it would be much better in expectation (for both humans and animals) if Eliezer's views on pretty much every other topic were more influential, rather than less, inside those places.
My negative reaction to your initial comment was mainly due to the way critiques of Eliezer (such as this post) are often framed, in which the claims "Eliezer's views are overly influential" and "Eliezer's views are incorrect / harmful" are combined into one big attack. I don't object to people making these claims in principle (though I think they're both wrong, in many cases), but when they are combined it takes more effort to separate and refute them.
(Your comment wasn't a particularly bad example of this pattern, but it was short and crisp and I didn't have any other major objections to it, so I chose to express the way it made me feel, on the expectation that it would be more likely to be heard and understood than the same point made in more heated disagreements.)
animal consciousness is especially frustrating because of the real harm it's caused to rationalists' openness to caring about animal welfare.
I think you might be greatly overestimating Eliezer's influence on this.
According to Wikipedia: "In a 2014 survey of 406 US philosophy professors, approximately 60% of ethicists and 45% of non-ethicist philosophers said it was at least somewhat 'morally bad' to eat meat from mammals. A 2020 survey of 1812 published English-language philosophers found that 48% said it was permissible to eat animals in ordinary circumstances, while 45% said it was not."
It really does not surprise me that people who place great importance on rationality value animals much less than the median EA does, given that non-human animals probably lack most kinds of advanced meta-level thinking and might plausibly not be "aware of their own awareness".
Even in EA, there are many great independent thinkers who are uncertain about whether animals should be members of the "moral community".
I think that sometimes in EA we risk forgetting how fringe veganism is, and I don't think Yudkowsky's arguments on the importance of animal suffering influence many of the views in the rationalist community on the subject. This holds especially for people at leading AI labs who might steer TAI: they seem to be very independent thinkers and are often critical of Yudkowsky's arguments (otherwise they wouldn't be working at leading AI labs in the first place).
For what it's worth, both Holden and Jeff express considerable moral uncertainty regarding animals, while Eliezer does not. Continuing Holden's quote:
My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don't think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don't want us to pass up the potentially considerable opportunities to improve their welfare.
I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.
I agree with you that it's quite difficult to quantify how much Eliezer's views on animals have influenced the rationalist community and those who could steer TAI. However, I think the influence is significant; if Eliezer were a staunch animal activist, I think the discourse surrounding animal welfare in the rationalist community would be different. I elaborate upon why I think this in my reply to Max H.