It’s frustrating to read comments like this because they make me feel like, if I happen to agree with Eliezer about something, my own agency and ability to think critically are being questioned before I’ve even joined the object-level discussion.
Separately, this comment makes a bunch of mostly-implicit object-level assertions about animal welfare and its importance, and a bunch of mostly-explicit assertions about Eliezer’s opinions and influence on rationalists and EAs, as well as the effect of this influence on the impacts of TAI.
None of these claims are directly supported in the comment, which is fine if you don’t want to argue for them here. But the way the comment is written might lead readers who agree with the implicit claims about animal welfare to accept the explicit claims about Eliezer’s influence and opinions, and their effects on TAI, with a less critical eye than they would if these claims were more clearly separated.
For example, I don’t think it’s true that a few FB posts / comments have had a “huge influence” on rationalist culture. I also think that worrying about animal welfare specifically when thinking about TAI outcomes is less important than you claim. If we succeed in being able to steer TAI at all (unlikely, in my view), animals will do fine—so will everyone else. At a minimum, there will also be no more global poverty, no more malaria, and no more animal suffering. Even if the specific humans who develop TAI don’t care at all about animals themselves (not exactly likely), they are unlikely to completely ignore the concerns of everyone else who does care. But none of these disagreements have much or any bearing on whether I think animal suffering is real (I find this at least plausible) and whether that’s a moral horror (I think this is very likely, if the suffering is real).
If we succeed in being able to steer TAI at all (unlikely, in my view), animals will do fine—so will everyone else
I’m not personally convinced, fwiw; this line of reasoning has some plausibility but feels extremely out of line with approximately every reasonable reference class TAI could be in.
I apologize for phrasing my comment in a way that made you feel like that. I certainly didn’t mean to insinuate that rationalists lack “agency and ability to think critically”—I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the sequences, and have been influenced on many subjects by Eliezer’s writings.
I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don’t believe that. Please allow me to enumerate my specific claims and their justifications:
Caring about animal welfare is important (99% confidence): Here’s the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer.
Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it’s corroborated here by many disinterested parties.
Eliezer’s views on animal welfare have had significant influence on views of animal welfare in rationalist culture (75% confidence):
A fair critique is that, sure, the sequences and HPMOR have had a huge influence on rationalist culture, but the claim that Eliezer’s views in domains that have nothing to do with rationality (like animal welfare) have had outsize influence is much less clear.
My only pushback is the experience I’ve had engaging with rationalists and reading LessWrong, where I’ve just seen rationalists reflecting Eliezer’s views on many domains other than “Rationality: A-Z” over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn’t the only influential EA/rationalist who believes this, and he didn’t originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
NYT: “two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement...Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.”
Sam Altman: “certainly [Eliezer] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc”.
On whether aligned TAI would create a utopia for humans and animals, I think the arguments for pessimism—especially about the prospects for animals—are serious enough that having TAI steerers care about animals is very important.
Thank you. I don’t have any strong objections to these claims, and I do think pessimism is justified. Though my guess is that a lot of people at places like OpenAI and DeepMind do care about animal welfare pretty strongly already. Separately, I think that it would be much better in expectation (for both humans and animals) if Eliezer’s views on pretty much every other topic were more influential, rather than less, inside those places.
My negative reaction to your initial comment was mainly due to the way critiques of Eliezer (such as this post) are often framed, in which the claims “Eliezer’s views are overly influential” and “Eliezer’s views are incorrect / harmful” are combined into one big attack. I don’t object to people making these claims in principle (though I think they’re both wrong, in many cases), but when they are combined, it takes more effort to separate and refute them.
(Your comment wasn’t a particularly bad example of this pattern, but it was short and crisp and I didn’t have any other major objections to it, so I chose to express how it made me feel, on the expectation that the point would be more likely to be heard and understood here than in a more heated disagreement.)