A few responses to @Bob Fischer and @Laura Duffy
Love points one to three from Bob! Perhaps unsurprisingly, once he starts disagreeing with me, I have some issues.
1. I think I’ve been misrepresented somewhat. I never claimed that the moral weights project did this: “Sum the number of proxies found for a species and divide by the total number of proxies to get the welfare range.”
What I said in the comment was “BOTH their sentience ranges and their behavior scores rely heavily on the presence of pain response behavior”.
And in a previous post I did comment that median final welfare ranges are fairly well approximated by the simple formula Behavioural Proxy sum × Sentience (see graph).
So indeed the headline numbers did actually turn out pretty close to the result of your statement: “If that were true, then the number of proxies would straightforwardly determine the maximum difference in welfare ranges.” I might be misinterpreting what you mean by this, though.
(Behavioural proxy percent) × (Probability of sentience) ≈ Median welfare range
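To spell out how I’m reading that relationship, here’s a minimal sketch in Python. The species names and numbers below are made-up placeholders for illustration, not figures from the MWP:

```python
# Minimal sketch (not RP's actual code): the simple product described above.
# All species names and numbers here are made-up placeholders, not MWP figures.

def approx_welfare_range(proxy_fraction: float, p_sentience: float) -> float:
    """Approximate the median welfare range as (behavioural proxy fraction) x P(sentience)."""
    return proxy_fraction * p_sentience

# Hypothetical entries: (proxies found / proxies assessed, probability of sentience)
examples = {
    "hypothetical species A": (40 / 80, 0.75),
    "hypothetical species B": (5 / 80, 0.068),
}

for name, (proxy_fraction, p_sentience) in examples.items():
    print(f"{name}: ~{approx_welfare_range(proxy_fraction, p_sentience):.4f}")
```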
2. I stand by (for the moment) my opinion that both the behavioural proxies and sentience probabilities DO seem to guarantee pretty high final moral weight numbers, although we will all have very different opinions on what ‘high’ means.
I don’t understand how you chose the 0.5% chance of sentience for your low-end calculation. It’s far lower than any number in your model: the lowest number in your sentience modelling, for a nematode, is 6.8%, and the silkworm, which was included in the MWP, is 8.5%. Why pick a number for the example 13x lower than your model actually generated? The 6.8% number from your model would give a calculation of more like 0.068 × 0.875 × 5/80, which equals 0.0037, or 0.37%, as a low-end number. This by my lights at least isn’t a very low baseline moral weight, but I understand if some would consider that a decently low baseline.
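For transparency, here’s that low-end arithmetic written out as a quick sketch (the 0.875 factor is simply carried over from the original example; I’m not asserting what it represents in RP’s model):

```python
# Reproducing the low-end calculation above: P(sentience) x 0.875 x (5/80 proxies).
p_sentience = 0.068      # lowest sentience estimate cited above (nematode)
middle_factor = 0.875    # the 0.875 figure carried over from the original example
proxy_fraction = 5 / 80  # 5 of 80 behavioural proxies

low_end = p_sentience * middle_factor * proxy_fraction
print(f"{low_end:.4f}")  # ~0.0037, i.e. about 0.37%
```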
I agree that you have individual models with a low baseline, but I’m discussing your overall process. Using your original overall process, I still think that high numbers are guaranteed. Also, if your method combines a bunch of models where some are close to P = 1, balanced with other models where P = 0.00001, then you’re going to get something in-the-middle-ish (say 0.2-0.8), which also seems high to me.
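As a toy illustration of that last point (assuming, just for illustration, a simple equal-weight average across models, which may not be RP’s actual aggregation method):

```python
# Toy illustration only: equal-weight averaging of made-up per-model estimates.
# This assumes a simple arithmetic mean, which may not match RP's aggregation.
model_estimates = [0.99, 0.9, 0.0001, 0.00001]  # some models near 1, some near 0

combined = sum(model_estimates) / len(model_estimates)
print(f"{combined:.3f}")  # ~0.473: the near-zero models barely pull the result down
```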
Also, as a side note (less important), I think that 5/80 for behavioural proxies is pretty hard to get for anything that moves around. Anything that has evolved to move is likely, from an evolutionary standpoint, to be attracted to things, withdraw from things, and have some kind of way to remember that; otherwise it wouldn’t survive. Maybe that does mean that anything that has evolved to move has a high chance of being sentient, though. It’s an interesting question I know has been discussed before (can’t remember where).
I was surprised to hear “We don’t really put much stock in the probability of sentience estimates, which weren’t the focus of the project and are subject to much more uncertainty than the welfare range estimates themselves conditional on sentience”. Given that the sentience number is half the final calculation for your headline numbers, which are used freely and widely for expected value calculations, the sentience number seems pretty important. It also does seem like you put a lot of work into estimating them. Given this statement, “On reflection, I think lower numbers are more appropriate than 6.8%, and I really would not anchor on that as ‘RP’s own lights’”, I wonder whether reasonable options might be:
1. Review the sentience numbers from the project and adjust them to where your thinking is now
2. Not publish a sentience-adjusted moral weight; instead, publish your unadjusted welfare ranges and let people choose their own best-guess sentience multiplier.
Thanks for the great points, Nick. Strongly upvoted.