I recently wrote a post on the EA forum about turning animal suffering into animal bliss using genetic enhancement. Titotal raised a thoughtful concern: “How do you check that your intervention is working? For example, suppose your original raccoons screech when you poke them, but the genetically engineered racoons don’t. Is that because they are experiencing less pain, or have they merely evolved not to screech?”
This is a very good point. I have been thinking about how we could be sure we are not merely changing the expressions of suffering, and I believe I have found a way to address this. In psychology, it is common to use factor analysis to study latent variables (variables that we cannot measure directly). It seems extremely reasonable to think that animal pain is real; the trouble is measuring it. We could try to get at pain by collecting a wide array of behaviors and measures associated with pain (heart rate, cortisol levels, facial expressions, vocalizations, etc.) and finding a latent factor of suffering that accounts for some of these behaviors.
To determine whether an intervention has changed the latent factor of suffering for the better, we could test for measurement invariance, an important step in making a valid comparison between two groups. This tests whether the factor loadings (the relationships between the latent factor and its indicators) remain the same across groups. If invariance holds and the latent factor is lower in the treated group, that implies a genuine reduction across the traits associated with suffering rather than a change in how suffering is expressed. The same logic would apply to environmental interventions as well.
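As a rough illustration of how such a check could look in practice, here is a minimal sketch, not a full multi-group confirmatory analysis: the indicator names, the `group` column, and the use of scikit-learn's `FactorAnalysis` are my own illustrative assumptions.

```python
# Minimal sketch: fit a one-factor model of "suffering" separately to control
# and treated animals and compare the loading patterns. A formal measurement
# invariance test would instead fit a multi-group confirmatory factor model
# with loadings constrained equal and compare model fit.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

INDICATORS = ["heart_rate", "cortisol", "grimace_score", "vocalizations"]

def one_factor_loadings(df: pd.DataFrame) -> pd.Series:
    """Standardize the indicators and return their loadings on a single factor."""
    X = (df[INDICATORS] - df[INDICATORS].mean()) / df[INDICATORS].std()
    fa = FactorAnalysis(n_components=1, random_state=0).fit(X)
    return pd.Series(fa.components_[0], index=INDICATORS)

def compare_loading_patterns(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per animal, with a 'group' column plus the indicator columns."""
    control = one_factor_loadings(df[df["group"] == "control"])
    treated = one_factor_loadings(df[df["group"] == "treated"])
    # Similar loading patterns (up to an arbitrary sign flip) suggest the same
    # latent factor is being measured in both groups, so a lower factor mean in
    # the treated group reflects less suffering rather than merely suppressed
    # expressions of it.
    return pd.DataFrame({"control": control, "treated": treated})
```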
As an illustration: imagine that I measure the welfare of a raccoon by the amount of screeching it does. A bad intervention would be taping the raccoon's mouth shut. This would reduce screeching, but there is no good reason to think it would alleviate suffering. However, imagine I gave the raccoon a drug and it acted less stressed, screeched less, had lower cortisol, and started acting much more friendly. This would be much better evidence of a true reduction in suffering.
There is much more to be defended in my thesis, but this felt like a thought worth sharing.
You could also do brain imaging to check for pain responses.
You might not even need to know what normal pain responses in the species look like, because you could just check normally painful stimuli vs control stimuli.
However, knowing what normal pain responses in the species look like would help. Also, across mammals, including humans and raccoons, the substructures responsible for pain (especially the anterior cingulate cortex) seem roughly the same, so I think we’d have a good idea of where to check.
Maybe one risk is that the brain would just adapt and recruit a different subsystem to generate pain, or use the same one in a different way. But control stimuli could help you detect that.
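To make the painful-versus-control comparison above concrete, here is a purely hypothetical sketch; the region, the per-trial activation arrays, and the function name are placeholders rather than an actual imaging pipeline.

```python
# Hypothetical check: does a pain-associated region (e.g., anterior cingulate
# cortex) respond more to normally painful stimuli than to control stimuli?
# Comparing this contrast between original and engineered animals speaks to
# "less pain" versus "less expression of pain". All data here are placeholders.
import numpy as np
from scipy.stats import ttest_ind

def pain_contrast_present(acc_painful: np.ndarray,
                          acc_control: np.ndarray,
                          alpha: float = 0.05) -> bool:
    """acc_painful / acc_control: mean regional activation per trial."""
    stat, p = ttest_ind(acc_painful, acc_control, equal_var=False)
    # A reliably larger response to painful stimuli suggests the pain system is
    # still engaged; its absence in engineered animals (with intact responses
    # to control stimuli) is evidence of reduced pain itself.
    return stat > 0 and p < alpha
```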
Another behavioural indicator would be (learned) avoidance of painful stimuli.
Great thoughts. I will need to think more deeply about how to make this feasible cost-wise. We need a large sample to find the relevant genes, and brain imaging at that scale could be challenging.
Animal welfare’s far from the only problem with factory farming:
https://forum.effectivealtruism.org/posts/rpBYejrhk7HQB6gJj/crispr-for-happier-farm-animals?commentId=hvCQ9kBvutrnkFm9h
Thanks Pat. That is something good to consider.
From a utilitarian perspective, it would seem there are substantial benefits to accurate measures of welfare.
I was listening to Adam Mastroianni discuss the history of trying to measure happiness and life satisfaction, and it was interesting that the measures show a level of stability across the decades. Could it really be that increases in material wealth do not result in large objective increases in happiness and satisfaction for humans? If so, it would seem that efforts to increase GDP and improve standards of living beyond the basics may be misdirected.
Furthermore, it seems it would be extremely helpful for policymaking to have an objective unit of welfare, like a util.
We could compare human and animal welfare directly, and genetically engineer animals to increase their utils.
While such efforts might not be super successful, even improving objective measures of wellbeing by, say, 10% would seem very important.
In conversations about x-risk, one common mistake is to suggest that since we have yet to invent something that kills all people, the historical record is not on the side of “doomers.” The mistake is survivorship bias: Ćirković, Sandberg, and Bostrom (2010) call this the Anthropic Shadow. Using base-rate frequencies to estimate the probability of events that reduce the number of people (observers) will result in a biased estimate.
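A toy simulation makes the bias concrete; the numbers below are arbitrary assumptions, purely for illustration.

```python
# Toy "anthropic shadow" simulation: observers only exist in timelines where no
# extinction-level event occurred, so the frequency of such events in their
# historical record understates the true rate.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.02      # assumed per-century chance of an extinction-level event
centuries = 50        # length of each simulated history
n_worlds = 100_000    # number of simulated timelines

# True where an extinction-level event happened in that world and century.
events = rng.random((n_worlds, centuries)) < true_rate

# Observers only exist in worlds where no such event ever occurred.
survived = ~events.any(axis=1)

# Event frequency in the surviving observers' records: exactly 0 by
# construction, no matter how high the true rate is.
observed_rate = events[survived].mean()
print(f"true rate: {true_rate:.3f}, rate seen by surviving observers: {observed_rate:.3f}")
```

Because observers only exist in timelines where the event never happened, their historical record always shows zero occurrences, regardless of the true underlying rate.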
If there are multiple possible timelines and AI p(doom) is very high (and soon), then we would expect a greater frequency of events that delay the creation of AGI (geopolitical issues, regulation, perhaps internal conflicts at AI companies, other disasters, etc.). It might be interesting to see whether superforecasters consistently underpredict events that would delay AGI, although figuring out how to actually interpret this information would be quite challenging unless the effect were blatantly obvious.
I guess the more likely explanation is that I'm born in a universe with more people and everything goes fine anyway. This is quite speculative and roughly laid out, but it's something I've been thinking about for a while.