This is probably going to be downvoted to oblivion, but I feel it’s worth stating anyway, if nothing else to express my frustration with and alienation from EA.
On a meta level, I somewhat worry that the degree to which the animal welfare choice is dominating the global health one kinda shows how seemingly out of touch many EAs have become with mainstream common-sense moral views.
In particular, I’m reminded of that quote from the Analects of Confucius:
When the stables were burnt down, on returning from court Confucius said, “Was anyone hurt?” He did not ask about the horses.
You can counter with a lot of math that checks out and arguments that make logical sense, but the average person on the street is likely to view the idea that you could ever elevate the suffering of any number of chickens above that of even one human child to be abhorrent.
Maybe the EAs are still technically right and other people are just speciesist, but to me this does not bode well for the movement gaining traction or popular support.
Just wanted to get that out of my system.
That seems like saying: “Let’s not donate to animal charities because there are people who would donate to the most effective human charities but decide to donate to the less effective human charities when they see people who donate to the most effective human charities switch their donations to animal charities.” Probably I’m not following the logic...
Also: if donating to the top-effective animal charities is 100+ times as cost-effective as donating to the top-effective human charities, that backfire effect (people donating to the less effective human charities instead of the top-effective human charities) should be very strong: more than 100 people should show this backfire effect (i.e. remain non-EA) per effective altruist who donates to top-effective animal charities. That seems very unlikely to me.
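To make the arithmetic behind that break-even claim explicit, here is a rough back-of-the-envelope sketch; the multiplier and donation size are hypothetical placeholders, not estimates:

```python
# Back-of-the-envelope sketch of the break-even point in the argument above.
# All numbers are hypothetical placeholders, not estimates.

multiplier = 100   # assumed cost-effectiveness ratio: top animal vs. top human charity
donation = 1_000   # assumed donation per person, in arbitrary units

# Good done (measured in "top human charity" units) when one EA directs
# their donation to a top animal charity instead:
good_from_switch = multiplier * donation

# Worst case for the backfire: each person put off from EA would have given
# to a top human charity, but now gives somewhere that does roughly nothing.
loss_per_deterred_person = donation

# Deterred people needed, per animal-charity donor, before the backfire
# outweighs the direct benefit:
break_even = good_from_switch / loss_per_deterred_person
print(break_even)  # 100.0 under these assumptions
```

Under these assumptions, more than 100 people would have to be put off per donor before the switch becomes net negative.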
What is the most effective and appropriate relationship with “mainstream common sense morality views” in your opinion? At one extreme, if we just parrot them, then we can just cut out the expensive meta middlemen and give directly to whatever mainstream opinion says we should.
I do think the skew would be meaningfully different but for the significant discrepancy in GW vs AW funding, both within EA and more generally.
I don’t know. Certainly just parroting them is wrong. I just think we should give some weight to majority opinion, as it represents an aggregate of many different human experiences that seem to have aligned together and found common ground.
Also, a lot of my worry is not so much that EAs might be wrong, so much as that if our views diverge too strongly from popular opinion, we run the risk of things like negative media coverage (“oh look, those EA cultists are misanthropic too”), and we also are less likely to have successful outreach to people outside of the EA filter bubble.
In particular, we already have a hard time with outreach in China, and this animal welfare emphasis is just going to further alienate them due to cultural differences, as you can probably tell from my Confucius quote. The Analects are taught in school in both China and Taiwan and are a significant influence in Asian societies.
It’s also partly a concern that groupthink dynamics might be at play within EA. I noticed that there are many more comments from the animal welfare crowd, and I fear that many of the global health people might be too intellectually intimidated to voice their views at this point, which would be bad for the debate.
The issue with majority opinion is that 500 years ago, the majority would have thought that most of what we do today is crazy.
I mean, even when I was 17, my opinion was close to the majority opinion (in my country), and I certainly wouldn’t trust it today, because it was simply uninformed.
The risk of alienating other people is a valid concern. I’d be glad to see research to determine the threshold that would allow us to maximise both reach and impactful donations. Beyond what percentage of donations going to animal welfare will the movement get less traction? 1%? 90%? Will people just not care about the raw numbers and maybe care more about something else?
For the groupthink point, I’m not sure if anything can be done. I’d be glad to hear from people who think more donations should go to GHD (they can use an anonymous account as well). But your initial post got 21 karma, which puts it in the top 5 comments of the page, so I think there is potential for civil discussion here.
It’s fair to point out that the majority has been wrong historically many times. I’m not saying this should be our final decision procedure or that we should lock in those values. But we need some kind of decision procedure for things, and I find that, when I’m uncertain, “asking the audience” or democracy seems like a good way to use the “wisdom of crowds” effect to get a relatively good prior.
I’m actually quite surprised by how quickly and how much that post has been upvoted. This definitely makes me update my priors positively about how receptive the forums are to contrarian viewpoints and civil debate. At least, I’m feeling less negativity than when I wrote that post.
Regarding the majority vote, I think “asking the audience” is not a good recipe when the audience is not very informed, which seems to be the case here (where would they get the information without much personal research?)
I understand trusting the wisdom of the crowds in situations where people reasonably understand the situation (to take a classic example, guessing the weight of a pig). However, most people here likely have little information about all the different ways animals are suffering, the scale, research about sentience, knowledge about scope insensitivity, and arguments in favour of things like speciesism. Which makes sense! Not everybody is looking at it deeply.
But this doesn’t provide a very good context for relying on the wisdom of the crowd.
One could also consider the general EA / EA-adjacent sentiment over time as a cross-check on the risk of current groupthink. Of course, later EAs could be responding to better evidence not available to earlier EAs. But I would also consider the possibility of changes in other factors (like perceived status, available funding for EAs, perceived lack of novel opportunities in a mature cause area that has strong interventions with near-limitless room for more funding) playing a major role.
I think this is an interesting dilemma, and I am sympathetic to some extent (even as an animal rights activist). At the heart of your concern are three things:
1. Being too radical risks losing popular support
2. Being too radical risks being wrong and causing more harm than good
3. How do we decide what ethical system is right or preferable without resorting to power or arbitrariness?
I think in this case, 2) is of lesser concern. It does seem like adults tend to give far more weight to humans than animals (a majority of a sample would save 1 human over 100 dogs), though interestingly children seem to be much less speciesist (Wilks et al., 2020). But I think we have good reasons to give substantial moral weight to animals. Given that animals have central nervous systems and nociceptors like we do, and given that we evolved from a long lineage of animals, we should assume that we inherited our ability to suffer from our evolutionary ancestors rather than uniquely developing it ourselves. Then there’s evidence, such as (if I remember correctly) that animals will trade off material benefits for analgesics. And I believe the scientific consensus has consistently and overwhelmingly been that animals feel pain. Animals are also in the present and the harms are concrete, so animal rights is not beset by some of the concerns that, say, long-termist causes are. So I think the probability that we will be wrong about animal rights is negligible.
I sympathize with the idea that being too radical risks losing support. I’ve definitely had that feeling myself in the past when I saw animal rights activists who preferred harder tactics, and I still have my disagreements with some of their tactics and ideas. But I’ve come to see the value in taking a bolder stance as well. From my experience (yes, on a college campus, but still), many people are surprisingly willing to engage with discussions about animal rights and about personally going vegan. Some are even thankful or later go on to join us in our efforts to advocate for animals. I think for many, it’s a matter of educating them about factory farming, confronting them with the urgency of the problem, and giving them space to reflect on their values.
And even if you don’t believe in the most extreme tactics, I think it’s hard to defend not advocating for animal rights at all. Just a few centuries ago, slavery was still widely accepted and practiced, and abolitionism was a minority opinion which often received derision and even threats of harm. The work of abolitionists was nevertheless instrumental in getting society to change its attitudes and its ways such that the average person today (at least in the West) would find slavery abhorrent. Indeed, people would roundly agree that slavery is wrong even if they were told to imagine that the enslaved person’s welfare increased due to their slavery (based on a philosophy class I took years ago). To make progress toward the good, society needs people who will go against the current majority.
And this may lead to the final question of how we decide what is right and what is wrong. This I have no rigorous answer to. We are trapped between the Scylla of dogmatism and the Charybdis of relativism. Here I can only echo the point I made above. I agree that we must give some weight to the majority morality, and that to immediately jump ten steps ahead of where we are is impractical and perhaps dangerous. But to veer too far into ossification and blind traditionalism is perhaps equally dangerous. I believe we must continue the movement and the process towards greater morality as best we can, because we see how atrocious the morality of the past has been and the evidence that the morality of the present is still far from acceptable.
the average person on the street is likely to view the idea that you could ever elevate the suffering of any number of chickens above that of even one human child to be abhorrent.
the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded amount of animal children to be abhorrent, too.
[note: i only swapped the order of humans/animals. my mind predicts that, at least without this text, this statement, but not the quoted one, would elicit negative reactions or be perceived as uncivil, despite the symmetry, because this kind of rhetoric is only normal/socially acceptable in the original case.]
if giving epistemic weight to popular morality (as you wrote you favor)[1], you’d still need to justify excluding from that the moralities of members of non-dominant species, otherwise you end up unjustly giving all that epistemic weight to whatever might-makes-right coalition takes over the planet / excludes others from ‘the public’ (such as by locking the outgroup in factory slaughter facilities, or extermination camps, or enslaving them), because only their dominant morality is being perceived.
otherwise, said weight would be distributed in a way which is inclusive of animals (or nazi-targeted groups, or enslaved people, in the case of those aforementioned moral catastrophes).
You can counter with a lot of math that checks out and arguments that make logical sense
this seems to characterize the split as: supporting humans comes from empathy, supporting animal minds comes from ‘cold logic and math’. but (1) the EA case for either would involve math/logic, and (2) many feel empathy for animals too.
(to be clear, i don’t agree, this is just a separate point)
the average animal in a factory farm is likely to view the idea that you could ever elevate the suffering of one human over that of an unbounded amount of animal children to be abhorrent, too.
Yes, of course. My point isn’t that they are right though. Chickens can’t become EAs. Only humans can. My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than human suffering.
if giving epistemic weight to popular morality (as you wrote you favor)[1], you’d still need to justify excluding from that the moralities of members of non-dominant species
Giving more epistemic weight to popular morality is in the light that we need popular support to get things done, and is a compromise with reality, rather than an ideal, abstract goal. To the extent that I think it should inform our priors, we cannot actually canvass the opinions of chickens or other species to get their moralities. We could infer it, but this would be us imagining what they would think, which would be speculative. I agree that ideally, if we could, we should also get those other preferences taken into consideration. I’m just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable.
but (1) the EA case for either would involve math/logic, and (2) many feel empathy for animals too.
Yes, many feel empathy for animals, myself included. I should point out that I am not advocating for ignoring animal suffering. If it were up to me, I’d probably allocate the funds by splitting them evenly between global health and animal welfare, as a kind of diversified portfolio strategy of cause areas. To me, that seems like the more principled way of handling the grave uncertainty involved in suffering estimates that lack clear confidence intervals. Note that even this would be a significant increase in the relative allocation to animal welfare compared to the current situation.
My point was that from the perspective of convincing humans to become EAs, choosing to emphasize animal welfare is going to make the job more difficult, because currently many non-EA humans are less sympathetic to animal suffering than human suffering.
That’s not the position I was responding to. Here is what you wrote:
It’s fair to point out that the majority has been wrong historically many times. I’m not saying this should be our final decision procedure or that we should lock in those values. But we need some kind of decision procedure for things, and I find that, when I’m uncertain, “asking the audience” or democracy seems like a good way to use the “wisdom of crowds” effect to get a relatively good prior.
That seems like you’re proposing actually giving epistemic weight to the beliefs of the public, not just { pretending to have the views of normal humans, possibly only during outreach }. My response is to that.
From your current comment:
Giving more epistemic weight to popular morality is in the light that we need popular support to get things done, and is a compromise with reality, rather than an ideal
Epistemic (and related terms you used, like priors) are about how you form beliefs about what is true. They are not about how you should act, so there cannot be an ‘epistemic compromise with the human public’ in the sense you wrote—that would instead be called, ‘pretending to have beliefs closer to theirs, to persuade them to join our cause’. To say you meant the latter thing by ‘epistemic weight’ seems like a definitional retreat to me: changing the definition of some term to make it seem like one meant something different all along.
(Some humans perform definitional retreats without knowing it, typically when their real position is not actually pinned down internally and they’re coming up with arguments on the spot that are a compromise between some internal sentiment and what others appear to want them to believe. But in the intentional case, this would be dishonest.)
I agree that ideally, if we could, we should also get those other preferences taken into consideration. I’m just using the idea of human democracy as a starting point for establishing basic priors in a way that is tractable.
There’s not actually any impractical ‘ideal-ness’ to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms.
(Restating your position as this also seems dishonest to me; you’ve displayed awareness of animals’ preferences from the start, so you can’t believe that it’s intractable to consider them.)
I do think we should establish our priors based on what other people think and teach us. This is how all humans normally learn anything that is outside their direct experience. A way to do this is to democratically canvass everyone to get their knowledge. That establishes our initial priors about things: any one person can be wrong, but many people are less likely to all be wrong about the same thing. False beliefs tend to be uncorrelated, while true beliefs align with some underlying reality and correlate more strongly. We can then modify our priors based on further evidence from things like direct experience, scientific experiments and analysis, or whatever other sources you find informative.
I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn’t diverge as much. I’m not saying to lie about what we believe to recruit them. That would obviously fail as soon as they figured out what we actually believe, and is also dishonest and lacks integrity.
And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you’re wrong, or they’re wrong, and we could all be wrong and the truth is some secret third thing. It’s basic epistemic humility to agree that we all have working but probably wrong models of the world.
And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent. I shouldn’t have made it sound like I was suggesting compromising by deception. Calling things less than ideal and a compromise with reality was a mistake on my part.
I think the most probable reason I worded it that way was that I felt that it wasn’t ideal to only give weight to the popular morality of the dominant coalition, which you pointed out the injustice of. Ideally, we should canvass everyone, but because we can’t canvass the chickens, it is a compromise in that sense.
And I apologize for the confusion. I am, as you suggested, still trying to figure out my real position, and coming up with arguments on the spot that mix my internal sentiments with external pressures in ways that may seem incoherent.
Thank you for acknowledging that.
Considering or trying on different arguments is good, but I’d suggest doing it explicitly. For example, instead of “I meant X, not Y” (unless that’s true), “How about new-argument X?” is a totally valid thing to say, even if having (or appearing to have) pinned-down beliefs might be higher status or something.
Some object-level responses:
I should clarify, I am not saying we should pretend to have beliefs closer to theirs. I am saying that having such divergent views will make it harder to recruit them as EAs. It would therefore be better for EA as a movement if our views didn’t diverge as much.
This sounds like it’s saying: “to make it easier to recruit others, our beliefs should genuinely be closer to theirs.” I agree that would not entail lying about one’s beliefs to the public, but I think that would require EAs lying to themselves[1] to make their beliefs genuinely closer to what’s popular.
For one’s beliefs about what is true to be influenced by anything other than evidence about whether it is true is, by definition, an influence which will tend to diverge from what is true.
I don’t think EAs should (somehow subtly) lie to themselves. If I imagine the EA which does this, it’s actually really scary, in ways I find hard to articulate.
And I think there can be epistemic compromise. You give the benefit of the doubt to other views by admitting your uncertainty and allowing the possibility that you’re wrong, or they’re wrong, and we could all be wrong
Sure, there can be epistemic compromise in that other sense, where you know there’s some probability of your reasoning being incorrect, or where you have no reason to expect yourself to be correct over someone who is as good at reasoning and also trying to form correct beliefs.
But it’s not something done because ‘we need popular support to get things done’.
this reminded me of this: If we can’t lie to others, we will lie to ourselves by Paul Christiano.
Many apparent cognitive biases can be explained by a strong desire to look good and a limited ability to lie; in general, our conscious beliefs don’t seem to be exclusively or even mostly optimized to track reality. If we take this view seriously, I think it has significant implications for how we ought to reason and behave.
Yeah, I should probably retract the “we need popular support to get things done” line of reasoning.
I think lying to myself is probably, on reflection, something I do to avoid actually lying to others, as described in that link in the footnote. I kind of decide that a belief is “plausible” and then give it some conditional weight, a kind of “humour the idea and give it the benefit of the doubt”. It’s kind of a technicality thing that I do because I’m personally very against outright lying, so I’ve developed a kind of alternative way of fudging to avoid hurt feelings and such.
This is likely related to the “spin” concept that I adopted from political debates. The idea of “spin” to me is to tell the truth from an angle that encourages a perception that is favourable to the argument I am trying to make. It’s something of a habit, and most probably epistemically highly questionable and something I should stop doing.
I think I also use these things to try to take an intentionally more optimistic outlook and be more positive in order to ensure best performance at tasks at hand. If you think you can succeed, you will try harder and often succeed where if you’d been pessimistic you’d have failed due to lack of resolve. This is an adaptive response, but it admittedly sacrifices some accuracy about the actual situation.
For one’s beliefs about what is true to be influenced by anything other than evidence about whether it is true is, by definition, an influence which will tend to diverge from what is true.
Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?
Though, what if I consider the fact that many people have independently reached a certain belief to itself be evidence that that belief might be true?
that is a form of evidence. if people’s beliefs all had some truly-independent probability of being correct, then in a large society it would become extreme evidence for any belief that >50% of people have, but it’s not actually true that people’s beliefs are independent.
human minds are similar, and human cultural environments are similar. often people’s conclusions aren’t actually independent, and often they’re not actually conclusions but are unquestioned beliefs internalized from their environment (parents, peers, etc). often people make the same logical mistakes, because they are similar entities (humans).
you still have to reason about that premise, “people’s conclusions about <subject> are independent”, as you would any other belief.
and there are known ways large groups of humans can internalize the same beliefs, with detectable signs like ‘becoming angry when the idea is questioned’.
(maybe usually humans will be right, because most beliefs are about low level mundane things like ‘it will be day tomorrow’. but the cases where we’d like to have such a prior are exactly those non-mundane special cases where human consensus can easily be wrong.)
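To illustrate that point about independence with made-up numbers, here is a small toy simulation (a sketch only; the probabilities are arbitrary assumptions): when judgements are truly independent, a large majority is near-conclusive evidence, while heavily correlated judgements tell you little more than a single judgement would.

```python
import random

# Toy simulation: majority accuracy with independent vs. correlated beliefs.
# All probabilities are illustrative assumptions, not estimates.

def majority_correct(n_voters, p_correct, p_copy, trials=10_000):
    """Fraction of trials in which the majority verdict is correct.

    p_correct: probability any independent judgement is right.
    p_copy:    probability a voter simply copies one shared source
               (which is itself right with probability p_correct).
    """
    wins = 0
    for _ in range(trials):
        source_right = random.random() < p_correct
        votes = 0
        for _ in range(n_voters):
            if random.random() < p_copy:
                votes += source_right                   # copied belief
            else:
                votes += random.random() < p_correct    # independent belief
        wins += votes > n_voters / 2
    return wins / trials

# Independent voters: the majority is almost always right.
print(majority_correct(n_voters=1001, p_correct=0.6, p_copy=0.0))  # ~1.0
# Heavily correlated voters: the majority is barely better than one voter.
print(majority_correct(n_voters=1001, p_correct=0.6, p_copy=0.9))  # ~0.6
```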
This answer feels like a very honest reflection on oneself, I like it.
Oh, you edited your comment while I was writing my initial response to it.
There’s not actually any impractical ‘ideal-ness’ to it. We already can factor in animal preferences, because we already know them, because they reactively express their preference to not be in factory farms.
(Restating your position as this also seems dishonest to me; you’ve displayed awareness of animals’ preferences from the start, so you can’t believe that it’s intractable to consider them.)
We can infer their preferences not to suffer, but we can’t know what their “morality” is. I suspect chickens and most animals in general are very speciesist and probably selfish egoists who are partial to next-of-kin, but I don’t pretend to know this.
It’s getting late in my time zone, and I’m getting sleepy, so I may not reply right away to future comments.
Agreed, I mean that just for this subject of factory farming, it’s tractable to know their preferences.