I really loved this piece of work! Great framework and great write-up :)
In particular, I’m excited about the simplicity and specificity of the pain and pleasure categories, and about how easily ethical worldviews can be encoded in this context.
Some questions and observations:
The different worldviews seem to be used to answer “how likely is it that A > B?” rather than to make some form of expected value calculation (say, “maximise expected choiceworthiness”, as in the Moral Uncertainty book). Have you considered alternative approaches? In particular, it may be harder to accurately aggregate preferences across worldviews when comparing multiple interventions, and it doesn’t take into account how close the margin is in each ranking.
Regarding the credence in each worldview (last column of this table): if I understand correctly, the process involved making subjective guesses about how much each worldview “makes sense” on a 1-5 scale, and then weighting each worldview’s likelihood linearly by that score. I understand that this is intended as a first step and a didactic example, but I’m curious about your thoughts on how this could be improved.
I think there are some delicate points where this may go wrong if done quickly; just fleshing out some thoughts (a toy sketch of the normalisation I have in mind follows these questions):
The different worldviews need to be mutually exclusive and together need to account for most of the relevant worldviews.
The individual subjective guess for each worldview should take into account how likely it is compared to similar worldviews as well.
A rating of 1-5 in this setting should be linearly proportional to the worldview’s probability.
The proposed definitions of the degrees of pleasure use some sort of preference-based equivalence with the mirrored degrees of pain. Doesn’t that assume that the value of pain and pleasure is symmetric? (Perhaps the explanation is that the animal’s preference doesn’t equal its value, but that feels a bit off to me.)
As you write, comparing different species can be complicated. If I understand correctly, one approach you suggest would be to start by saying that, say, one hour of hurtful pain is not species-dependent, and then, to account for species differences, “shrink” the pain-pleasure scale for each animal in some way. Is this a correct interpretation? If so, doesn’t that conflict with the behavioral definitions of the pain categories?
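To make the credence point concrete, here’s a toy sketch of the linear normalisation I have in mind. The worldviews and scores are entirely made up; it’s only meant to show the arithmetic I’m describing, not anything from the post itself.

```python
# Toy example: each worldview gets a subjective 1-5 "makes sense" score,
# and its credence is taken to be linearly proportional to that score.
# The worldview names and scores below are hypothetical.
plausibility = {
    "worldview A": 4,
    "worldview B": 3,
    "worldview C": 2,
}

total = sum(plausibility.values())
credences = {view: score / total for view, score in plausibility.items()}

for view, credence in credences.items():
    print(f"{view}: credence = {credence:.2f}")

# A score of 4 vs 2 is treated as "exactly twice as likely", which is the
# linearity assumption I'm questioning in the last point above.
```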
Thanks again! :)
Thanks for your feedback and thoughts :)
Re your questions 1 and 2 - yep, I definitely agree that there are better approaches to moral uncertainty; I chose mine for illustrative purposes, as you point out. Moreover, in our application of this framework, the bottom-line result of “value weighted by framework” just isn’t that important to our decision-making: it’s a small piece of information within a framework that we don’t weight that strongly. For me, the useful information that arises from the moral uncertainty step is seeing whether particular interventions are strong only under particular moral frameworks or strong across frameworks. Systematic ways to incorporate a variety of moral frameworks might have value (and I’m certainly not an expert here), but for me the point of these exercises is more to serve as a quantitative guide to qualitative reasoning.
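To illustrate the contrast you raise (with made-up credences and scores, not anything from our actual analysis), the two aggregation styles look roughly like this:

```python
# Hypothetical credences and choiceworthiness scores, purely for illustration.
credences = {"worldview A": 0.5, "worldview B": 0.3, "worldview C": 0.2}

# Choiceworthiness of interventions X and Y under each worldview (made up).
score_x = {"worldview A": 10, "worldview B": 2, "worldview C": 1}
score_y = {"worldview A": 4, "worldview B": 3, "worldview C": 5}

# Style 1: credence-weighted probability that X ranks above Y.
# This ignores how large the margin is under each worldview.
p_x_beats_y = sum(c for w, c in credences.items() if score_x[w] > score_y[w])

# Style 2: credence-weighted expected choiceworthiness.
# This is sensitive to margins, but assumes scores are comparable across worldviews.
ev_x = sum(credences[w] * score_x[w] for w in credences)
ev_y = sum(credences[w] * score_y[w] for w in credences)

print(f"P(X > Y) across worldviews: {p_x_beats_y:.2f}")              # 0.50
print(f"Expected choiceworthiness: X = {ev_x:.1f}, Y = {ev_y:.1f}")  # X = 5.8, Y = 3.9
```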
Re your question 3 - yep, you’re right. I noticed this, but didn’t develop the pleasure definitions because it’s pretty rare for positive welfare to be relevant in our day-to-day research/prioritisation (kinda linked to the point above—if an intervention’s strength is conditional on positive welfare being assigned moral importance, then that would serve as an argument against that intervention). In any case, Michael’s comment above links to much better pleasure definitions.
If I understand correctly, one approach you suggest would be to start by saying that, say, one hour of hurtful pain is not species-dependent, and then, to account for species differences, “shrink” the pain-pleasure scale for each animal in some way. Is this a correct interpretation? If so, doesn’t that conflict with the behavioral definitions of the pain categories?
Yeah, there are a few ways to do it. If you adopt Rethink’s welfare-range approach, then that adjustment should probably be incorporated before the pain scale is used (I think what I just wrote is true given the definition of welfare ranges as used by Rethink, but probably fact-check that before you quote me on this!!). I’m still not totally convinced by the welfare-range approach (or even by weighting species at all). Again, the point for me is more to ask “Does this intervention depend on lobsters / fly larvae / silk worms / whatever being assigned a particular level of moral importance?” If so, that might be one argument (among many others) that could weaken the intervention.
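As a very rough sketch of what “incorporate the welfare-range adjustment before the pain scale” could look like (the multipliers below are made up, and again, please fact-check how Rethink actually defines welfare ranges before leaning on anything like this):

```python
# Hypothetical species-level welfare-range multipliers relative to some
# reference species; these numbers are illustrative only.
welfare_range = {"chicken": 0.33, "shrimp": 0.03}

# Hours of "hurtful"-level pain averted per species (also made up).
hurtful_pain_hours = {"chicken": 100, "shrimp": 100}

# Scale each species' pain-hours by its assumed welfare range before
# aggregating, rather than treating an hour of hurtful pain as identical
# across species.
adjusted = {
    species: hours * welfare_range[species]
    for species, hours in hurtful_pain_hours.items()
}

for species, hours in adjusted.items():
    print(f"{species}: {hours:.1f} welfare-range-adjusted pain-hours")

# The tension raised in the question remains: the pain categories are defined
# behaviourally for each species, so it isn't obvious that a single multiplier
# should rescale them like this.
```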