What follows are some probability-of-sentience- and rate-of-subjective-experience-adjusted welfare range estimates.
The probability of sentience is multiplied through here, right? Some of these animals are assigned <50% probability of sentience but have nonzero probability-of-sentience-adjusted welfare ranges at the median. Another way to present this would be to construct the random variable that's 0 if they're not sentient, and otherwise equal to the random variable representing their moral weight conditional on sentience. This would be your actual distribution of welfare ranges for the animal, accounting for their probability of sentience. That being said, what you have now might be more useful as a range of expected moral weights for (approximately) risk-neutral EV-maximizing utilitarians, capturing deep uncertainty or credal fragility.
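For concreteness, here's a minimal numerical sketch of the distinction. The probability of sentience and the conditional welfare-range distribution below are made-up illustrative values, not anyone's actual estimates:

```python
# Contrast two presentations of a sentience-adjusted welfare range:
# (1) multiply the probability of sentience through every draw, vs.
# (2) a mixture that is 0 when not sentient and a conditional draw otherwise.
# Numbers here are hypothetical, chosen only to illustrate the point.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

p_sentience = 0.3  # assumed probability of sentience (deliberately <50%)
# assumed welfare range conditional on sentience: lognormal centered near 0.05
welfare_given_sentient = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)

# Presentation 1: probability of sentience multiplied through each draw.
scaled = p_sentience * welfare_given_sentient

# Presentation 2: mixture random variable -- 0 if not sentient,
# otherwise a draw from the conditional welfare-range distribution.
sentient = rng.random(n) < p_sentience
mixture = np.where(sentient, welfare_given_sentient, 0.0)

print("means:  ", scaled.mean(), mixture.mean())          # approximately equal
print("medians:", np.median(scaled), np.median(mixture))  # mixture median is 0
```

The two presentations agree in expectation, but with a probability of sentience below 50% the mixture's median welfare range is exactly 0, which is the contrast described above.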