I apologize for phrasing my comment in a way that made you feel like that. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically". I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the Sequences, and have been influenced on many subjects by Eliezer's writings.
I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:
Caring about animal welfare is important (99% confidence): Here's the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer.
Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it's corroborated here by many disinterested parties.
Eliezer's views on animal welfare have had significant influence on views of animal welfare in rationalist culture (75% confidence):
A fair critique is that sure, the Sequences and HPMOR have had huge influence on rationalist culture, but the claim that Eliezer's views in domains that have nothing to do with rationality (like animal welfare) have had outsize influence on rationalist culture is much less clear.
My only pushback is the experience I've had engaging with rationalists and reading LessWrong, where I've just seen rationalists reflecting Eliezer's views on many domains other than "Rationality: A-Z" over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn't the only influential EA/rationalist who believes this, and he didn't originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
NYT: "two of the world's prominent A.I. labs — organizations that are tackling some of the tech industry's most ambitious and potentially powerful projects — grew out of the Rationalist movement... Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community."
Sam Altman: "certainly [Eliezer] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc".
On whether aligned TAI would create a utopia for humans and animals, I think the arguments for pessimism, especially about the prospects for animals, are serious enough that having TAI steerers care about animals is very important.
Thank you. I don't have any strong objections to these claims, and I do think pessimism is justified. Though my guess is that a lot of people at places like OpenAI and DeepMind already care about animal welfare pretty strongly. Separately, I think it would be much better in expectation (for both humans and animals) if Eliezer's views on pretty much every other topic were more influential, rather than less, inside those places.
My negative reaction to your initial comment was mainly due to the way critiques of Eliezer (such as this post) are often framed, in which the claims "Eliezer's views are overly influential" and "Eliezer's views are incorrect/harmful" are combined into one big attack. I don't object to people making these claims in principle (though I think they're both wrong, in many cases), but when they are combined it takes more effort to separate and refute them.
(Your comment wasn't a particularly bad example of this pattern, but it was short and crisp and I didn't have any other major objections to it, so I chose to express how it made me feel, expecting that would be more likely to be heard and understood than making the point in a more heated disagreement.)