I’m not an expert on moral weights research itself, but approaching this rationally, I’m strongly in favour of commissioning an independent, methodologically distinct reassessment of moral weights, precisely because a single, highly cited study can become an invisible “gravity well” for the whole field.
Two design suggestions that echo robustness principles in other scientific domains:
Build in structured scepticism. Even a small team can add value if its members are explicitly chosen for diverse priors, including at least one (ideally several) researchers who are publicly on record as cautious about high animal weights. The goal is not to “dilute” the cause, but to surface hidden assumptions and push every parameter through an adversarial filter.
Consider parallel, blind teams. A lightweight version of adversarial collaboration: one sub-team starts from a welfare-maximising animal-advocacy stance, another from a welfare-sceptical stance. Each produces its own model and headline numbers under pre-registered methods; then the groups reconcile differences. Where all three sets of numbers (Team A, Team B, RP) converge, we gain confidence. Where they diverge, at least we know which assumptions drive the spread.
The result doesn’t have to dethrone RP; even showing that key conclusions are insensitive to modelling choices (or, conversely, highly sensitive) would be valuable decision information for funders.
In other words: additional estimates may not be “better” in isolation, but they reveal how wide our collective confidence interval really is; for something as consequential as cross-species moral weights, that’s well worth the cost.
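To make the convergence check concrete, here is a toy sketch. All numbers are invented for illustration; none come from RP or any actual team, and the one-order-of-magnitude threshold is an arbitrary placeholder for whatever tolerance funders pre-register.

```python
import math

# Purely hypothetical moral-weight estimates (human = 1.0) from the
# three sources described above: Team A, Team B, and RP.
estimates = {
    "chicken": {"team_a": 0.30, "team_b": 0.25, "rp": 0.33},
    "shrimp":  {"team_a": 0.05, "team_b": 0.0004, "rp": 0.03},
}

def log10_spread(values):
    """Orders of magnitude between the highest and lowest estimate."""
    return math.log10(max(values)) - math.log10(min(values))

for species, ests in estimates.items():
    spread = log10_spread(ests.values())
    verdict = "converged" if spread < 1 else "divergent; inspect assumptions"
    print(f"{species}: spread = {spread:.2f} orders of magnitude ({verdict})")
```

With these made-up inputs, the chicken estimates land within a factor of ~1.3 of each other (convergence, so more confidence), while the shrimp estimates span over two orders of magnitude, which is exactly the signal that the teams should reconcile and identify which assumption drives the gap.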
Thanks for fleshing this out more; both of your design suggestions make a lot of sense to me. You also stated one of my major concerns far better than I did:
“A single, highly cited study can become an invisible ‘gravity well’ for the whole field.”
Hi Nick, just sent you a brief DM about a “stress-test” idea for the moral-weight “gravity well”. Would appreciate any steer on who might sanity-check it when you have a moment. Thanks!