Hi Vasco,
When Bob was selecting the species, he was thinking of adult insects as the edge cases for the model (bees, BSF). He included juveniles to see what the model implies, not because he really thought the model should be extended to them. You’ll notice that, in the book, the species list narrows considerably partly for this reason.
On the points related to sentience-conditioned welfare ranges, e.g. “So an organism having 0 neurons only decreases its welfare range conditional on sentience, and the rate of subjective experience of humans by 1⁄9. I understand having no neurons at all would also lead to a lower probability of sentience, but I think it should directly imply a much larger decrease in the welfare range conditional on sentience.”
I think it’s a mistake to treat the hypothetical sentience-conditioned welfare range of an animal with zero neurons (an intermediate step in the calculations) as indicative of a problem with the overall methodology for animals with complex brains.
Put straightforwardly, if an animal has zero neurons, it would have a welfare range of 0 overall, because I would give it a zero percent chance of being sentient, which affects all the models.
I am also not going to put a precise probability of sentience on nematodes, but I do think it is much, much closer to zero, and that it crosses the threshold at which acting on it amounts to being Pascal’s mugged.
I’m finding these discussions very draining and not productive at this point, so will not be engaging further in this debate.
Thanks, Laura.
I encourage you to add a disclaimer to the post with RP’s mainline welfare ranges stating that Bob does not think the methodology used to produce them is applicable to silkworms. In practice, what does this mean? Would it be reasonable to neglect beings to which your methodology is not supposed to apply? Why is the methodology applicable to black soldier flies (BSFs), but not silkworms? I understand a methodology can be more or less applicable, but I still do not understand which concrete criteria you are using. I also think the applicability of the methodology should ideally be taken into account in the estimates themselves, so that they are more comparable.
I suggest people account for the lower applicability of your methodology to less complex organisms by using welfare ranges equal to the geometric mean of RP’s mainline welfare ranges and the animal’s number of neurons as a fraction of humans’. Does this seem reasonable?
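The adjustment above can be sketched in a few lines. This is only an illustration of the arithmetic: the welfare range and neuron count below are hypothetical placeholder values, not RP’s actual estimates, except for the commonly cited figure of roughly 86 billion neurons in a human brain.

```python
import math

# Hypothetical inputs for illustration only (not RP's actual estimates).
rp_welfare_range = 0.002        # placeholder mainline welfare range for an insect
animal_neurons = 1e5            # placeholder neuron count for that insect
human_neurons = 8.6e10          # ~86 billion neurons in a human brain

# Neuron count as a fraction of a human's.
neuron_fraction = animal_neurons / human_neurons

# Proposed adjusted welfare range: the geometric mean of the two quantities.
adjusted = math.sqrt(rp_welfare_range * neuron_fraction)
```

By construction, the geometric mean always lies between the two inputs, so the adjustment pulls the mainline welfare range toward the (typically much lower) neuron-count fraction without rounding it all the way to 0.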
I am not certain that neurons are required for an organism to have a non-constant welfare, so I think organisms without neurons have welfare ranges above 0. I guess you mean that organisms without neurons have a welfare range of roughly 0, but exactly how close to 0 matters. As I say in the post, “Rounding to 0 a probability of sentience, or welfare per animal-year close to 0 introduces an infinite amount of scope insensitivity. Regardless of the number of beings affected, the change in their welfare will be estimated to be exactly 0”.
Could you elaborate on why you believe the probability of sentience of nematodes is Pascalianly low, and therefore arguably much lower than RP’s mainline estimate of 6.8 %? It seems one could argue along the same lines that the probability of sentience of silkworms is also Pascalianly low, and therefore that there is no need to worry about improving the conditions of BSFs and mealworms, of which RP estimates 417 billion will be farmed in 2033.
Feel free to follow up later if you are finding this discussion draining and unproductive. I think it would be good for RP to write a post clarifying the extent to which the methodology used to produce RP’s mainline welfare ranges applies to the animals covered and not covered, and why.