Jeff, are you saying you think “an intuition that a human year was worth about 100-1000 times more than a chicken year” is a starting point of “unusually pro-animal views”?
In some sense, this seems true relative to the views implied by most humans’ actions. But, as Wayne pointed out above, this same critique could apply to, say, the typical American’s views about global health and development. Generally, it doesn’t seem to buy much to frame things relative to people who’ve never thought about a given topic substantively, and I don’t think you’d consider this a good critique of a foreign aid think tank looking into how much to value global health and development.
Maybe you are making a different point here?
Also, it would help if you were explicit about what you think a neutral baseline is. What would you consider more typical or standard views about animals from which to update? Moment-to-moment human experience is worth 10,000x that of a chicken, conditional on chickens being sentient? 1,000,000x? And, whatever your position, why do you think that is a more reasonable starting point?
are you saying you think “an intuition that a human year was worth about 100-1000 times more than a chicken year” is a starting point of “unusually pro-animal views”? … What would you consider more typical or standard views about animals from which to update?
I did say that, and at the time I wrote it I would have predicted that, in realistic situations requiring people to trade off harms/benefits going to humans vs chickens, the median respondent would just always choose the human (though maybe that’s just our morality having a terrible sense of scale), and that Peter’s 300x mean would have put him somewhere around the 95th percentile.
Since writing that I read Michael Dickens’ comment, linking to this SSC post summarizing the disagreements [1], and I’m now less sure. It’s hard for me to tell exactly what the surveys included: for example, I think they excluded people who didn’t think animals have moral worth at all, and it’s not clear to me whether they were asking people to compare lives or life years. I don’t know if there’s anything better on this?
it doesn’t seem to buy much to frame things relative to people who’ve never thought about a given topic substantively
I agree! I’m not trying to say that uninformed people’s off-the-cuff guesses about moral weights are very informative about what moral weights we should have. Instead, I’m saying that people start with a wide range of background assumptions, and if two people started off with 5th and 95th percentile views on trading off benefits/harms to chickens vs humans, I expect them to end up farther apart in their post-investigation views than two people who both started at the 95th percentile.
[1] That post cites David Moss from RP as having run a better survey, and summarizes it, but doesn’t link to it—I’m guessing this is because it was Moss doing something informally with SSC and the SSC post is the canonical source, but if there’s a full writeup of the survey I’d like to see it!
What do you think of this rephrasing of your original argument:
I suspect people rarely get deeply interested in the value of foreign aid unless they come in with an unusually high initial intuitive view that being human is what matters, not being in my country… If you somehow could convince a research group, not selected for caring about non-Americans, to pursue this question in isolation, I’d predict they’d end up with far less foreign-aid-friendly results.
I think this argument is very bad, and I suspect you do too. You can rightfully point out that, in this context, someone starting at the 5th percentile before going into a foreign aid investigation and then determining that foreign aid is much more valuable than the general population thinks would provide, in some sense, stronger evidence than if they had instead started at the 95th percentile. However, that seems not very relevant. What’s relevant is whether it is defensible at all to norm to a population based on their work on a topic, given a question of values like this (that, or whether there is some disanalogy between this and animals).
Generally, I think the typical American, when faced with real tradeoffs (and they actually are faced with these tradeoffs implicitly, as part of a package vote), doesn’t value the lives of the global poor equally to the lives of their fellow Americans. More importantly, I think you shouldn’t norm where your values on global poverty end up after investigation back to what the typical American thinks. You should weigh the empirical and philosophical evidence about how to value the lives of the global poor directly, and not do too much, if any, reference-class checking against other people’s views on the topic. The same argument holds for whether and how much we should value people 100 years from now, after accounting for empirical uncertainty.
Fundamentally, the question isn’t what people actually do think (except for practical purposes); the question is what beliefs are defensible after weighing the evidence. I think it’s fine to be surprised by what RP’s moral weight work says about capacity for welfare, and I think there is still high uncertainty in this domain. I just don’t think either of our priors, or the general population’s priors, on the topic should be taken very seriously.
What do you think of this rephrasing of your original argument: I suspect people rarely get deeply interested in the the value of foreign aid … I think this argument is very bad and I suspect you do too.
First, I think GiveWell’s research, say, is mostly consumed by people who agree people matter equally regardless of which country they live in. Which makes this scenario more similar to my “When using the moral weights of animals to decide between various animal-focused interventions this is not a major concern: the donors, charity evaluators, and moral weights researchers are coming from a similar perspective.”
But say I argued that US Department of Transportation funding ($12.5M/life) should be redirected to foreign aid until the two had equal marginal costs per life saved. I don’t think the objection I’d get would be “Americans have greater moral value” but instead things like “saving lives in other countries is the role of private charity, not the government”. In trying to convince people to support global health charities I don’t think I’ve ever gotten the objection “but people in other countries don’t matter” or “they matter far less than Americans”, while I expect vegan advocates often hear that about animals.
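The redirect-until-marginal-costs-equalize argument is just arithmetic on cost per life saved. A minimal sketch: the $12.5M/life DOT figure comes from the comment above, while the foreign-aid cost per life used below is a purely illustrative assumption, not a sourced estimate.

```python
# Sketch of the "equalize marginal cost per life saved" comparison.
# The DOT figure is from the comment; the foreign-aid figure is an
# assumed placeholder for illustration only.
dot_cost_per_life = 12_500_000  # USD per life saved (from the comment)
aid_cost_per_life = 5_000       # USD per life saved (illustrative assumption)

# Lives the same dollars would save if redirected, per DOT life saved:
lives_per_dot_life = dot_cost_per_life / aid_cost_per_life
print(lives_per_dot_life)  # 2500.0
```

Under these assumed numbers, every marginal DOT life saved trades against thousands of lives elsewhere, which is the force of the redirection argument.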
In trying to convince people to support global health charities I don’t think I’ve ever gotten the objection “but people in other countries don’t matter” or “they matter far less than Americans”, while I expect vegan advocates often hear that about animals.
I have gotten the latter one explicitly and the former implicitly, so I’m afraid you should get out more often :).
More generally, the view that foreigners and/or immigrants don’t matter, or matter little compared to native-born locals, is fundamental to political parties around the world. It’s a banal take in international politics. Sure, some opposition to global health charities is an implied or explicit empirical claim about the role of government. But fundamentally, not all of it is: a lot of people don’t value the lives of the out-group, and people not in your country are in the out-group (or at least not in the in-group) for much of the world’s population.
First, I think GiveWell’s research, say, is mostly consumed by people who agree people matter equally regardless of which country they live in.
GiveWell donors are not representative of all humans. I think a large fraction of humanity would select the “we’re all equal” option on a survey but clearly don’t actually believe it or act on it (which brings us back to revealed preferences in trades like those humans make about animal lives).
But even if none of that is true, were someone to make this argument about the value of the global poor, the best moral (I make no claims about what’s empirically persuasive) response is “make a coherent and defensible argument against the equal moral worth of humans including the global poor”, and not something like “most humans actually agree that the global poor have equal value so don’t stray too far from equality in your assessment.” If you do the latter, you are making a contingent claim based on a given population at a given time. To put it mildly, for most of human history I do not believe we even would have gotten people to half-heartedly select the “moral equality for all humans” option on a survey. For me at least, we aren’t bound in our philosophical assessment of value by popular belief here or for animal welfare.
I have gotten the latter one explicitly and the former implicitly, so I’m afraid you should get out more often :).
Yikes; ugh. Probably a lot of this is me talking to so many college students in the Northeast.
“make a coherent and defensible argument against the equal moral worth of humans including the global poor”
I think maybe I’m not being clear enough about what I’m trying to do with my post? As I wrote to Wayne below, what I’m hoping happens is:
1. Some people who don’t think animals matter very much respond to RP’s weights with “that seems really far from where I’d put them, but if those are really right then a lot of us are making very poor prioritization decisions”.
2. Those people put in a bunch of effort to generate their own weights.
3. Probably those weights end up in a very different place, and then there’s a lot of discussion, figuring out why, and identifying the core disagreements.
if there’s a full writeup of the survey I’d like to see it!

David’s post is here: Perceived Moral Value of Animals and Cortical Neuron Count
Awesome, thanks! Good post!