Hi Nick. Thanks for the kind words about the MWP. We agree that it would be great to have other people tackling this problem from different angles, including ones that are unfriendly to animals. We’ve always said that our work was meant to be a first pass, not the final word. A diversity of perspectives would be valuable here.
For what it’s worth, we have lots of thoughts about how to extend, refine, and reimagine the MWP. We lay out several of them here. In addition, we’d like to adapt the work we’ve been doing on our Digital Consciousness Model for the MWP, which uses a Bayesian approach. Funding is, and long has been, the bottleneck—which explains why there haven’t been many public updates about the MWP since we finished it (apart from the book, which refines the methodology in notable ways). But if people are interested in supporting these or related projects, we’d be very glad to work on them.
I’ll just add: I’ve long thought that one important criticism of the MWP is that it’s badly named. We don’t actually give “moral weights,” at least if that phrase is understood as “all things considered assessments of the importance of benefiting some animals relative to others” (whether human or nonhuman). Instead, we give estimates of the differences in the possible intensities of valenced states across species—which only double as moral weights given lots of contentious assumptions.
All things considered assessments may be possible. But if we want them, we need to grapple with a huge number of uncertainties, including uncertainties over theories of welfare, operationalizations of theories of welfare, approaches to handling data gaps, normative theories, and much else besides. The full project is enormous and, in my view, is only feasible if tackled collaboratively. So, while I understand the call for independent teams, I’d much prefer a consortium of researchers trying to make progress together.
Hi Bob.
“In addition, we’d like to adapt the work we’ve been doing on our Digital Consciousness Model for the MWP, which uses a Bayesian approach.”
I do not see much value in improving the estimates for the probability of sentience presented in your book. I believe it is more important to decrease the uncertainty in the (expected hedonistic) welfare per unit time conditional on sentience, which I think is much larger than the uncertainty in the probability of sentience.
I also worry about analysing just the probability of consciousness/sentience, because it is not independent of the welfare per unit time conditional on consciousness/sentience. Less strict operationalisations of the probability of consciousness/sentience will tend to result in a lower welfare per unit time conditional on consciousness/sentience.
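To make the non-independence concrete, here is a toy calculation in which all the numbers are made up:

```python
# Toy numbers only: they illustrate why P(sentience) and welfare per unit time
# conditional on sentience are not independent, not actual estimates.

def expected_welfare_rate(p_sentience: float, welfare_rate_if_sentient: float) -> float:
    """Expected (hedonistic) welfare per unit time = P(sentience) * E[welfare rate | sentient]."""
    return p_sentience * welfare_rate_if_sentient

# Strict operationalisation: fewer systems count as sentient, but those that do
# are assumed to have more intense valenced states.
strict = expected_welfare_rate(p_sentience=0.2, welfare_rate_if_sentient=0.5)

# Permissive operationalisation: more systems count as sentient, but the average
# intensity conditional on sentience is correspondingly lower.
permissive = expected_welfare_rate(p_sentience=0.8, welfare_rate_if_sentient=0.125)

print(strict, permissive)  # 0.1 0.1: the same expectation, factorised very differently
```

Improving one factor in isolation can therefore give a misleading picture of the expected welfare per unit time, which is the quantity that matters for decisions.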
“Funding is, and long has been, the bottleneck”
Have large funders explained their lack of interest? If not, what is your best guess?
Sorry, Vasco; we weren’t clear. The idea is to use the DCM as a blueprint for aggregating the data we collected in the MWP, not to produce new estimates of sentience. The focus would be on capacity for welfare.
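For a rough sense of what aggregating proxy data in a Bayesian way could look like, here is a minimal sketch; the candidate welfare ranges, proxies, prior, and likelihoods are purely illustrative assumptions, not the DCM's actual structure or the MWP's data:

```python
import numpy as np

# Illustrative sketch only: a discrete Bayesian update over candidate welfare
# ranges given binary welfare-relevant proxies.

candidates = np.array([1e-6, 1e-4, 1e-2, 1e-1, 1.0])   # welfare range relative to humans
prior = np.full(len(candidates), 1 / len(candidates))  # uniform prior over the grid

# P(proxy observed | true welfare range); rows are proxies, columns are candidates.
likelihoods = np.array([
    [0.05, 0.20, 0.60, 0.80, 0.95],  # e.g. nociceptors and centralised processing
    [0.01, 0.05, 0.30, 0.60, 0.90],  # e.g. motivational trade-offs involving noxious stimuli
])
observed = [True, True]              # both proxies observed in the species of interest

posterior = prior.copy()
for row, seen in zip(likelihoods, observed):
    posterior *= row if seen else (1 - row)
posterior /= posterior.sum()

print(posterior)                             # posterior over the candidate welfare ranges
print(float(np.dot(candidates, posterior)))  # posterior mean welfare range
```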
I see. Thanks for clarifying.
Thanks @Bob Fischer, those are all good points.
I agree it’s very difficult, and probably impossible, to “get right” with a small team of researchers, but I still think (as many people have commented) that there would be great value in truly independent work on this. I think there is too much upside to independent work here to continue with only collaboration, even if a reduction in quality might be a downside.
If work continued with only collaboration, I think the Gravity Well effect mentioned by would be hard to avoid, credibility would be reduced, and new researchers might find it hard to flesh out new methodology and ideas, or in some cases to take an adversarial stance, if RP’s team were involved from the beginning of any new research.
Of course, collaboration and conversation would then come later.
@KarolinaSarek🔸, to what extent is the Animal Welfare Fund (AWF) open to funding research that reduces the uncertainty in comparisons of (expected hedonistic) welfare across species? @LewisBollard, how about Coefficient Giving (CG)? I think much more of that research is needed to conclude which interventions robustly increase welfare. I do not know of any intervention which robustly increases welfare, due to potentially dominant uncertain effects on soil animals and microorganisms. Even neglecting these, I believe there is lots of room to change funding decisions as a result of more of that research.
I understand Ambitious Impact (AIM), Animal Charity Evaluators (ACE), maybe AWF, and CG sometimes use, for robustness checks, the (expected) welfare ranges Rethink Priorities (RP) initially presented, or the ones in Bob’s book, as if they are within a factor of 10 of the right estimates (such that these could be 10 % to 10 times as large). However, I can easily see much larger differences. For example, the estimate in Bob’s book for the welfare range of shrimps is 8.0 % that of humans, but I would say one reasonable best guess (though not the only one) is 10^-6, the ratio between the numbers of neurons of shrimps and humans.
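To illustrate how much the welfare range assumption can matter, here is a toy calculation; only the two welfare ranges come from the discussion above, and the cost-effectiveness inputs are made up:

```python
# Toy sensitivity check: the welfare ranges 0.08 and 1e-6 are the two figures
# discussed above; the individuals-helped-per-dollar inputs are hypothetical.

def human_equivalent_value(individuals_helped_per_dollar: float, welfare_range: float) -> float:
    """Welfare gain per dollar in human-equivalent units under a simple linear model."""
    return individuals_helped_per_dollar * welfare_range

shrimp_helped_per_dollar = 1000.0   # hypothetical
humans_helped_per_dollar = 0.01     # hypothetical

human_value = human_equivalent_value(humans_helped_per_dollar, 1.0)
for welfare_range in (0.08, 1e-6):
    shrimp_value = human_equivalent_value(shrimp_helped_per_dollar, welfare_range)
    print(f"welfare range {welfare_range:g}: shrimp/human cost-effectiveness ratio = {shrimp_value / human_value:.3g}")
```

Under the first welfare range, the shrimp intervention dominates by roughly 4 orders of magnitude; under the second, it looks about 10 times less cost-effective than the human one. So the welfare range assumption, rather than the intervention data, does most of the work, which is why I think reducing this uncertainty could change funding decisions.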
@eleanor mcaree, to what extent is ACE’s Movement Grants program open to funding research that reduces the uncertainty in interspecies welfare comparisons? @Jesse Marks, how about The Navigation Fund (TNF)? @Zoë Sigle 🔹, how about Senterra Funders? @JamesÖz 🔸, how about Mobius and the Strategic Animal Funding Circle (SAFC)? You can check my comment above for context about why I think such research would be valuable.
Hi Vasco, Senterra Funders’ FAQ should answer your questions.
Thanks, Zoë. I see that funders are the ones who decide what to fund, and that you only provide advice if they so wish, as explained below. What if funders ask you for advice on which species to support? Do you base your advice on the welfare ranges presented in Bob’s book? Have you considered recommending research on welfare comparisons across species to such funders, such as the projects in RP’s research agenda on valuing impacts across species?
Q: Do Senterra Funders staff decide how funders make grant decisions?
A: No, each Senterra member maintains full autonomy over their grantmaking. Some Senterra members seek Senterra’s philanthropic advising, in which Senterra staff conduct research and make recommendations specific to the donor’s interests. Some Senterra members engage in collaborative grantmaking facilitated by Senterra staff. Ultimately, it’s up to each member to decide how and where to give.