In fact, you could assign 20% of your credence to the hypothesis that animals have welfare ranges of zero: that still wouldn't cut our estimates by 10x.
Hi Bob.
I agree. Moving 20% of the weight from the considered set of models to one implying a welfare range of 0 would only decrease the welfare range by 20%. However, I worry the weights of the models are close to arbitrary. In your book about comparing welfare across species, there seems to be only one line about the weights: "We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model". People usually give each model a weight of at least 0.1/"number of models", which is at least 3.33% (= 0.1/3) for 3 models, when it is quite hard to estimate the weights. However, giving weights which are not much smaller than the uniform weight of 1/"number of models" could easily lead to huge mistakes.

As a silly example, if I asked random 7-year-olds whether the gravitational force between 2 objects is proportional to distance^-2 (the correct answer), distance^-20, or distance^-200, I imagine a significant fraction would pick the exponents -20 and -200. Assuming 60% picked -2, 20% picked -20, and 20% picked -200, one might naively conclude that the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Yet there is lots of empirical evidence against this of which the respondents are unaware. The right conclusion would be that the respondents have no idea about the right exponent, because they would not be able to adequately justify their picks.
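The failure mode can be sketched in a few lines of Python. The weights and exponents below are the made-up survey numbers from the example, not real data; the point is just that a credence-weighted mean is dominated by low-credence but extreme models:

```python
# Hypothetical credences from the gravity-exponent example.
weights = [0.6, 0.2, 0.2]        # fraction of respondents picking each answer
exponents = [-2, -20, -200]      # exponent each "model" proposes

# Credence-weighted mean: two 20% weights on wildly wrong
# answers swamp the 60% weight on the correct one.
mean_exponent = sum(w * e for w, e in zip(weights, exponents))
print(round(mean_exponent, 1))  # -45.2, far from the correct -2
```

Note that the extreme models contribute 0.2*(-20) + 0.2*(-200) = -44 of the -45.2 total, even though they hold only 40% of the credence combined.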