A quick comment is that this seems to build largely on RP’s own work on sentience and welfare ranges. I think it would be good to at least state that these calculations and conclusions are contingent on your previous conclusions being fairly accurate, and that if your previous estimates were way off, then these calculations would look very different. This doesn’t make the work any less valuable.
This isn’t a major criticism, but I think it’s important to clearly state major assumptions in serious analytic work of this nature.
And a question, perhaps a little nitpicky: “However, given that EV is a function of sentience and numerosity....” Isn’t EV a function of welfare range and numerosity? I thought probability of sentience is a component of the welfare range but not the whole shebang, but I might be missing something.
Also I didn’t fully understand the significance of this, would appreciate further explanation if anyone can be bothered :)
“There is more uncertainty about the probability that chickens are sentient than that humans are, but there is a smaller range of uncertainty for chicken sentience than for small invertebrates. We did not provide a formal model of ambiguity avoidance, so we aren’t entirely confident here, but we doubt that the amount of ambiguity in chicken sentience will be enough to tip the scales in favor of helping humans. Likewise, there is probably more value in researching small invertebrate sentience than chicken sentience, given that resolving the smaller amount of ambiguity about the latter is unlikely to make a significant difference to overall value comparisons.”
Thanks for your comments, Nick.

On the first point, we tried to provide general formulae that allow people to input their own risk weightings, welfare ranges, probabilities of sentience, etc. We did use RP’s estimates as a starting point for setting these parameters. At some points (like fn 23), we note important thresholds at which a model will render different verdicts about causes. If anyone has judgments about various parameters and choices of risk models, we’re happy to hear them!
On the second point, I totally agree that welfare range matters as well (so your point isn’t nitpicky). I spoke too quickly. We incorporate this in our estimations of how much value is produced by various interventions (we assume that shrimp interventions create less value/individual than human ones).
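To make the shape of these formulae concrete, here’s a minimal sketch of the kind of calculation described in the first two points. This is not the paper’s actual model; the function, parameter names, and all numbers are invented for illustration.

```python
# Hypothetical sketch: expected value of an intervention as a product of the
# probability of sentience, the species' welfare range, the number of
# individuals affected, and the welfare gain per individual. All inputs here
# are made up, not RP's estimates.

def expected_value(p_sentience, welfare_range, numerosity, gain_per_individual):
    """Risk-neutral EV: welfare produced, discounted by sentience probability."""
    return p_sentience * welfare_range * numerosity * gain_per_individual

# Illustrative (invented) comparison: a shrimp intervention reaches far more
# individuals, but each unit of help counts for less per individual.
human_ev = expected_value(p_sentience=1.0, welfare_range=1.0,
                          numerosity=1_000, gain_per_individual=1.0)
shrimp_ev = expected_value(p_sentience=0.4, welfare_range=0.03,
                           numerosity=10_000_000, gain_per_individual=0.1)
print(human_ev, shrimp_ev)
```

The point of factoring it this way is that readers can swap in their own probabilities of sentience and welfare ranges and see where the verdict flips.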
On the third point, a few things to say. First, while there are some approaches to ambiguity aversion in the literature, we haven’t committed to or formally explored any one of them here (for various reasons). If you like a view that penalizes ambiguity—with more ambiguous probabilities penalized more strongly—then the more uncertain you are about the target species’ sentience, the more you should avoid gambles involving them. Second, we suspect that we’re very certain about the probability of human sentience, pretty certain about chickens, pretty uncertain about shrimp, and really uncertain about AIs. For example, I will entertain a pretty narrow range of probabilities about chicken sentience (say, between .75 and 1) but a much wider range for shrimp (say, between .05 and .75). Since more research would do the most good where there is the most ambiguity to resolve, and there is more ambiguity regarding invertebrates and AI, we should care a lot about researching them!
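One way to see how ambiguity about sentience could "tip the scales" is a toy version of one rule from the literature, alpha-maxmin, which evaluates a gamble using a pessimism-weighted mix of the worst- and best-case probabilities in your interval. This is not a model the paper commits to; the intervals echo the chicken and shrimp ranges mentioned above, and the payoff and alpha are invented.

```python
# Toy alpha-maxmin rule (illustrative only): with pessimism weight alpha, use
# an effective probability that leans toward the bottom of your interval.
# Wider intervals therefore get a larger penalty relative to their midpoint.

def alpha_maxmin_ev(p_low, p_high, payoff, alpha=0.7):
    """alpha = degree of pessimism (alpha = 1.0 is pure worst-case maxmin)."""
    p_effective = alpha * p_low + (1 - alpha) * p_high
    return p_effective * payoff

# Intervals from the comment above; payoff of 100 is arbitrary.
chicken_ev = alpha_maxmin_ev(0.75, 1.0, payoff=100)   # narrow interval
shrimp_ev = alpha_maxmin_ev(0.05, 0.75, payoff=100)   # wide interval
print(chicken_ev, shrimp_ev)
```

Under this toy rule the chicken gamble ends up close to its midpoint value, while the shrimp gamble is pushed well below its midpoint, which is the sense in which narrowing the shrimp interval through research would matter more than narrowing the chicken one.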