[6 of 6] “What would make you know it’s not working?”
There are three levels at which you might answer this question:
(1) How would you know if field-building was the wrong strategy?
If AI changes the nature of research so much that academia becomes irrelevant.
This seems the most plausible way our field-building strategy could be wrong, but so far we haven’t found a strategy that seems more robust to the huge uncertainties created by AI. If you have an idea for an alternative strategy that seems reasonably likely to be much more robust than field-building, please let us know, because this is an active area of strategic research for us.
If someone finds a way to reduce suffering for ~trillions of wild animals that doesn’t require more research/technology (either to implement or to evaluate), that can be shown with high confidence to be net-positive even after accounting for effects on non-target species, and that seems likely to shift social norms toward caring explicitly about wild animal welfare over the long run.
If there’s massive social change in favor of wild animal welfare—something like the way pollution and habitat destruction rapidly became issues of wide concern in the US between 1962 (when Rachel Carson published Silent Spring) and 1970 (when Richard Nixon founded the Environmental Protection Agency and Gaylord Nelson organized the first Earth Day). If that happened, it would be reasonable to expect an academic field to emerge somewhat organically, the way conservation biology did in the 1980s.
(2) How would you know if WAI’s efforts weren’t on track to succeed at field-building?
This is the failure mode we invest the most in monitoring. We’re tracking a wide range of metrics related to field growth and indicators related to our causal impact. We’re still building up that system (collecting historical data, deciding which metrics are most relevant, etc.), but we hope to have an easily shareable dashboard sometime early in 2026. For now, I suggest looking at the supporting documents in Animal Charity Evaluators’ 2025 recommendation of WAI to get a more detailed look at how we approach monitoring and evaluation.
(3) How would you know if the field wasn’t on track to lead to real change for animals?
We think this will be pretty hard to get data on before the field is more established (i.e., at least another 5 years), because as long as the community is still evolving, its present form isn’t necessarily representative of its long-term state.
For now, we’re looking for positive indications that our priorities are increasingly represented within the field: valuing highly abundant populations/taxa, researching invertebrates, considering counterfactuals, being open to human interference in nature, accounting for the possibility of net-negative lives, etc.
Negative signs might include indications that some sort of sub-optimal culture is dominating and self-perpetuating (particularly concerning if the culture represents a relatively small part of society at large, because that would suggest our field is actively selecting for it, rather than failing to filter it out). For example, wildlife management science was heavily populated by hunters and anglers until recently; veterinary medicine is primarily funded by animal ag interests and continues to have a strong pro-farming culture. (I don’t know to what extent these particular trends in academic subcultures exist outside the US.)
Thanks for the clarifying comment, Cam!

> If AI changes the nature of research so much that academia becomes irrelevant.
Not very relatedly, in terms of cause prioritisation, I wonder whether it makes more sense to prioritise building capacity for increasing digital welfare or the welfare of soil animals. Do you have any thoughts on this? I estimate that changing (not robustly increasing) the welfare of soil animals will remain more cost-effective than changing digital welfare for at least the next few decades.
> (2) How would you know if WAI’s efforts weren’t on track to succeed at field-building?
Do you have any suggestions for analyses I could do to inform this?