the average person on the street is likely to view the idea that you could ever elevate the suffering of any number of chickens above that of even one human child to be abhorrent.
the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded number of animal children to be abhorrent, too.
[note: i only swapped the order of humans/animals. my mind predicts that, at least without this text, this statement, but not the quoted one, would elicit negative reactions or be perceived as uncivil, despite the symmetry, because this kind of rhetoric is only normal/socially acceptable in the original case.]
if giving epistemic weight to popular morality (as you wrote you favor)[1], you’d still need to justify excluding from that the moralities of members of non-dominant species; otherwise you end up unjustly giving all that epistemic weight to whatever might-makes-right coalition takes over the planet / excludes others from ‘the public’ (such as by locking the outgroup in factory slaughter facilities, or extermination camps, or enslaving them), because only their dominant morality is being perceived.
without that exclusion, said weight would be distributed in a way which is inclusive of animals (or of nazi-targeted groups, or enslaved people, in the case of those aforementioned moral catastrophes).
You can counter with a lot of math that checks out and arguments that make logical sense
this seems to characterize the split as: supporting humans comes from empathy, supporting animal minds comes from ‘cold logic and math’. but (1) the EA case for either would involve math/logic, and (2) many feel empathy for animals too.
(to be clear, i don’t agree; this is just a separate point)
the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded number of animal children to be abhorrent, too.
Yes, of course. My point isn’t that they are right though. Chickens can’t become EAs. Only humans can. My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than human suffering.
if giving epistemic weight to popular morality (as you wrote you favor)[1], you’d still need to justify excluding from that the moralities of members of non-dominant species
Giving more epistemic weight to popular morality is in light of the fact that we need popular support to get things done; it is a compromise with reality rather than an ideal, abstract goal. To the extent that I think it should inform our priors, we cannot actually canvass the opinions of chickens or other species to get their moralities. We could infer it, but this would just be us imagining what they would think, which is speculative. I agree that ideally, if we could, we should also get those other preferences taken into consideration. I’m just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable.
but (1) the EA case for either would involve math/logic, and (2) many feel empathy for animals too.
Yes, many feel empathy for animals, myself included. I should point out that I am not advocating for ignoring animal suffering. If it were up to me, I’d probably allocate the funds by splitting them evenly between global health and animal welfare, as a kind of diversified portfolio strategy across cause areas. To me, that is the more principled way of handling the grave uncertainty that comes with suffering estimates lacking clear confidence intervals. Note that even this would be a significant increase in relative allocation to animal welfare compared to the current situation.
My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than human suffering.
That’s not the position I was responding to. Here is what you wrote:
It’s fair to point out that the majority has been wrong historically many times. I’m not saying this should be our final decision procedure and to lock in those values. But we need some kind of decision procedure for things, and I find when I’m uncertain, that “asking the audience” or democracy seem like a good way to use the “wisdom of crowds” effect to get a relatively good prior.
That seems like you’re proposing actually giving epistemic weight to the beliefs of the public, not just { pretending to have the views of normal humans, possibly only during outreach }. My response is to that.
From your current comment:
Giving more epistemic weight to popular morality is in light of the fact that we need popular support to get things done; it is a compromise with reality rather than an ideal
‘Epistemic’ (and related terms you used, like ‘priors’) concerns how you form beliefs about what is true, not how you should act, so there cannot be an ‘epistemic compromise with the human public’ in the sense you wrote; that would instead be called ‘pretending to have beliefs closer to theirs, to persuade them to join our cause’. To say you meant the latter by ‘epistemic weight’ seems like a definitional retreat to me: changing the definition of a term to make it seem like one meant something different all along.
(Some humans perform definitional retreats without knowing it, typically when their real position is not actually pinned down internally and they’re coming up with arguments on the spot that are a compromise between some internal sentiment and what others appear to want them to believe. But in the intentional case, this would be dishonest.)
I agree that ideally, if we could, we should also get those other preferences taken into consideration. I’m just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable.
There’s not actually any impractical ‘ideal-ness’ to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms.
(Restating your position as this also seems dishonest to me; you’ve displayed awareness of animals’ preferences from the start, so you can’t believe that it’s intractable to consider them.)
I do think we should establish our priors based on what other people think and teach us. This is how all humans normally learn anything that is outside their direct experience. A way to do this is to democratically canvass everyone to get their knowledge. That establishes our initial priors: individual people can be wrong, but many people are less likely to all be wrong about the same thing. False beliefs tend to be uncorrelated, while true beliefs align with some underlying reality and correlate more strongly. We can then modify our priors based on further evidence from things like direct experience, scientific experiments and analysis, or whatever other sources you find informative.
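To make that concrete, here is a toy sketch of the picture I have in mind, under the admittedly unrealistic assumption that each person’s opinion is an independent, mildly reliable signal; the 0.55 reliability, the 60/40 poll, and the 4x likelihood ratio are all made-up numbers for illustration:

```python
# A toy sketch, not a claim about real numbers: treat each person's opinion as a
# noisy, independent signal about a binary claim, form a prior from a poll, then
# update on further direct evidence. The 0.55 "reliability" is a made-up assumption.

def update(prior_odds: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

odds = 1.0                            # start agnostic: 1:1 odds the claim is true
p_right = 0.55                        # assumed chance any single opinion is correct
lr_yes = p_right / (1 - p_right)      # evidence contributed by one endorsement
lr_no = (1 - p_right) / p_right       # evidence contributed by one rejection

# Suppose a poll finds 60 of 100 people endorse the claim.
odds = update(odds, lr_yes**60 * lr_no**40)
print(f"prior from the poll:    P(claim) ~ {odds / (1 + odds):.3f}")

# Later, direct evidence (an experiment, say) that is 4x likelier if the claim is false.
odds = update(odds, 1 / 4)
print(f"after further evidence: P(claim) ~ {odds / (1 + odds):.3f}")
```

Under the independence assumption, even a modest 60/40 split becomes near-certainty before the later evidence pulls it back down; that independence assumption is exactly what gets questioned further down.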
I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn’t diverge as much. I’m not saying to lie about what we believe to recruit them. That would obviously fail as soon as they figured out what we actually believe, and is also dishonest and lacks integrity.
And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you’re wrong, or they’re wrong, and we could all be wrong and the truth is some secret third thing. It’s basic epistemic humility to agree that we all have working but probably wrong models of the world.
And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent. I shouldn’t have made it sound like I was suggesting compromising by deception. Calling things less than ideal and a compromise with reality was a mistake on my part.
I think the most probable reason I worded it that way was that I felt that it wasn’t ideal to only give weight to the popular morality of the dominant coalition, which you pointed out the injustice of. Ideally, we should canvass everyone, but because we can’t canvass the chickens, it is a compromise in that sense.
And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent.
Thank you for acknowledging that.
Considering or trying on different arguments is good, but I’d suggest doing it explicitly. For example, instead of “I meant X, not Y” (unless that’s true), “How about new-argument X?” is a totally valid thing to say, even if having (or appearing to have) pinned-down beliefs might be higher status or something.
Some object-level responses:
I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn’t diverge as much.
This sounds like it’s saying: “to make it easier to recruit others, our beliefs should genuinely be closer to theirs.” I agree that would not entail lying about one’s beliefs to the public, but I think that would require EAs lying to themselves[1] to make their beliefs genuinely closer to what’s popular.
For one’s beliefs about what is true to be influenced by anything other than evidence about whether it is true is, by definition, an influence that will tend to pull those beliefs away from what is true.
I don’t think EAs should (somehow subtly) lie to themselves. If I imagine the EA which does this, it’s actually really scary, in ways I find hard to articulate.
And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you’re wrong, or they’re wrong, and we could all be wrong
Sure, there can be epistemic compromise in that other sense, where you know there’s some probability of your reasoning being incorrect, or where you have no reason to expect yourself to be correct over someone who is as good at reasoning and also trying to form correct beliefs.
But it’s not something done because ‘we need popular support to get things done’.
[1] this reminded me of this: If we can’t lie to others, we will lie to ourselves by Paul Christiano:
Many apparent cognitive biases can be explained by a strong desire to look good and a limited ability to lie; in general, our conscious beliefs don’t seem to be exclusively or even mostly optimized to track reality. If we take this view seriously, I think it has significant implications for how we ought to reason and behave.
Yeah, I should probably retract the “we need popular support to get things done” line of reasoning.
I think lying to myself is probably, on reflection, something I do to avoid actually lying to others, as described in that link in the footnote. I kind of decide that a belief is “plausible” and then give it some conditional weight, a kind of “humour the idea and give it the benefit of the doubt”. It’s kind of a technicality thing that I do because I’m personally very against outright lying, so I’ve developed a kind of alternative way of fudging to avoid hurt feelings and such.
This is likely related to the “spin” concept that I adopted from political debates. The idea of “spin” to me is to tell the truth from an angle that encourages a perception that is favourable to the argument I am trying to make. It’s something of a habit, and most probably epistemically highly questionable and something I should stop doing.
I think I also use these things to try to take an intentionally more optimistic outlook and be more positive in order to ensure best performance at tasks at hand. If you think you can succeed, you will try harder and often succeed where if you’d been pessimistic you’d have failed due to lack of resolve. This is an adaptive response, but it admittedly sacrifices some accuracy about the actual situation.
For one’s beliefs about what is true to be influenced by anything other than evidence about whether it is true is, by definition, an influence that will tend to pull those beliefs away from what is true.
Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?
Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?
that is a form of evidence. if people’s beliefs all had some truly-independent probability of being correct, then in a large society it would become extreme evidence for any belief that >50% of people have, but it’s not actually true that people’s beliefs are independent.
human minds are similar, and human cultural environments are similar. often people’s conclusions aren’t actually independent, and often they’re not actually conclusions but are unquestioned beliefs internalized from their environment (parents, peers, etc). often people make the same logical mistakes, because they are similar entities (humans).
you still have to reason about that premise, “people’s conclusions about <subject> are independent”, as you would any other belief.
and there are known ways large groups of humans can internalize the same beliefs, with detectable signs like ‘becoming angry when the idea is questioned’.
(maybe usually humans will be right, because most beliefs are about low level mundane things like ‘it will be day tomorrow’. but the cases where we’d like to have such a prior are exactly those non-mundane special cases where human consensus can easily be wrong.)
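to make this concrete, here’s a toy simulation (all parameters made up) comparing how often a 100-person majority is right when opinions are formed independently vs. when most people copy one shared cultural signal:

```python
# A toy simulation with made-up parameters: how often is a 100-person majority
# right when opinions are formed independently, vs. when most people simply
# adopt one shared cultural signal?
import random

N_PEOPLE = 100
P_CORRECT = 0.55   # assumed chance an independently formed opinion is right
P_COPY = 0.8       # assumed chance a person just copies the shared signal
TRIALS = 20_000

def majority_is_right(correlated: bool) -> bool:
    truth = True                                  # label the true answer "True"
    shared_signal = random.random() < P_CORRECT   # one culture-wide belief
    correct_votes = 0
    for _ in range(N_PEOPLE):
        if correlated and random.random() < P_COPY:
            belief = shared_signal                # internalized, not concluded
        else:
            belief = random.random() < P_CORRECT  # independent reasoning
        correct_votes += (belief == truth)
    return correct_votes > N_PEOPLE / 2

for label, correlated in [("independent", False), ("mostly copied", True)]:
    rate = sum(majority_is_right(correlated) for _ in range(TRIALS)) / TRIALS
    print(f"{label:>14}: majority is right in {rate:.0%} of trials")
```

under these made-up numbers, the independent majority is right in roughly four out of five trials, while the mostly-copied majority is barely more reliable than a single person.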
This answer feels like a very honest bit of self-reflection; I like it.
Oh, you edited your comment while I was writing my initial response to it.
There’s not actually any impractical ‘ideal-ness’ to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms.
(Restating your position as this also seems dishonest to me; you’ve displayed awareness of animals’ preferences from the start, so you can’t believe that it’s intractable to consider them.)
We can infer their preferences not to suffer, but we can’t know what their “morality” is. I suspect chickens and most animals in general are very speciesist and probably selfish egoists who are partial to next-of-kin, but I don’t pretend to know this.
It’s getting late in my time zone, and I’m getting sleepy, so I may not reply right away to future comments.
Agreed. I only meant that, for this particular subject of factory farming, it’s tractable to know their preferences.