Thanks for this excellent post! The distinction between ‘puntable’ and ‘less puntable’ ideas seems like a really helpful way for advocates to think about tactic prioritisation.
On the point about AI-enabled modelling of wild animal welfare and implications of different interventions: are there any existing promising examples of this? The one example I’ve come across is the model described in the paper ‘Predicting predator–prey interactions in terrestrial endotherms using random forest’ but the predictions seem pretty basic and not necessarily any better than non-AI modelling.
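For context, my understanding is that the model in that paper is essentially a standard tabular classifier over species trait pairs, roughly along the lines of the sketch below (the trait columns and data here are invented for illustration, not taken from the paper):

```python
# Rough sketch of a random-forest predator–prey interaction model.
# The features and labels are synthetic stand-ins, purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features for each predator–prey species pair.
X = np.column_stack([
    rng.lognormal(3, 1, n),   # predator body mass (g)
    rng.lognormal(2, 1, n),   # prey body mass (g)
    rng.uniform(0, 1, n),     # habitat overlap (0–1)
    rng.uniform(0, 1, n),     # activity-time overlap (0–1)
])
# Synthetic labels: interactions more likely with a larger predator and high overlap.
logit = 0.5 * np.log(X[:, 0] / X[:, 1]) + 2 * X[:, 2] + 2 * X[:, 3] - 2
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A pairwise trait classifier like this predicts whether an interaction exists, not what an intervention would do to welfare, which is why the predictions strike me as fairly basic.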
Also, why did you decide that TAI’s role in ‘infrastructure needs’ and in ‘getting the “academic stamp of approval”’ wasn’t useful to think about?
Hi Max, thanks for the positive feedback and for the question.
I will ask our research team if they are aware of any specific papers I could point to; several of them are more familiar with this landscape than I am. My general idea that AI-enabled modeling would be beneficial comes mostly from the basic guess that, since AI is already pretty good at coding, work that relies on coding might get a lot better if we had TAI. If that’s right, then even if we don’t see great examples of modeling work being useful now, it could nevertheless get a lot better sooner than we think.
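To make that concrete, here is a toy example of the kind of coding-heavy modeling I have in mind: a crude predator–prey simulation scored with an entirely made-up welfare proxy, comparing a baseline against a hypothetical intervention. Every parameter, the proxy, and the intervention are invented purely to show the shape of the exercise, not to make any ecological claim:

```python
# Toy sketch: compare a hypothetical intervention in a crude Lotka–Volterra
# predator–prey model using a made-up welfare proxy. All parameters are
# invented for illustration; real WAW modeling would need far more care.
import numpy as np

def simulate(prey_growth, predation, conversion, pred_death,
             prey0=10.0, pred0=10.0, dt=0.005, steps=10000):
    """Euler-integrate a Lotka–Volterra predator–prey model; return the prey trajectory."""
    prey, pred = prey0, pred0
    prey_traj = []
    for _ in range(steps):
        d_prey = prey_growth * prey - predation * prey * pred
        d_pred = conversion * predation * prey * pred - pred_death * pred
        prey = max(prey + dt * d_prey, 0.0)
        pred = max(pred + dt * d_pred, 0.0)
        prey_traj.append(prey)
    return np.array(prey_traj)

def welfare_proxy(prey_traj):
    """Made-up proxy: penalize the fraction of time prey density is very low,
    treated here as a crude stand-in for starvation risk."""
    return -np.mean(prey_traj < 2.0)

# Baseline vs. a hypothetical intervention that slightly reduces predation.
baseline = welfare_proxy(simulate(1.1, 0.4, 0.1, 0.4))
intervention = welfare_proxy(simulate(1.1, 0.3, 0.1, 0.4))
print("baseline proxy:", baseline, "intervention proxy:", intervention)
```

The point is just that building, calibrating, and stress-testing models like this is exactly the kind of code-shaped work that could scale up a lot if TAI makes coding much cheaper.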
Thanks for bringing up the usefulness sentence; I could have been a lot clearer there and will revise it in future versions. I think I mainly meant that I was less confident about what TAI would mean for infrastructure and academic influence, so any possible implications for WAW strategy would be more tentative. Thinking about it a bit more now, though, I think the two cases are somewhat different.
For infrastructure: I down-weighted this issue in part because I find the idea that a manufacturing explosion will let every scientist have a lab in their house less probable, at least on short timelines, than a software-based takeoff. But also, and perhaps more importantly, I generally think that on my list of reasons to do science within academia, 1 and 3 are stronger than 2: infrastructure can be solved with more money, while the others can’t. So even if thinking about TAI caused me to throw out the infrastructure consideration, I might still choose to focus on growing WAWS inside academia, which makes figuring out exactly what TAI means for infrastructure less useful for strategy.
For “academic stamp of approval”: I think I probably just shouldn’t have mentioned this here, because I do end up talking about legitimacy in the piece quite a bit. But here’s an attempt at articulating more clearly what I was getting at:
1. Assume TAI makes academic legitimacy less important after TAI arrives.
2. You still want decision-makers to care about wild animal welfare before TAI arrives, so that they use it well, etc.
3. Most decision-makers don’t know much about WAW now, and one of the main pathways by which wildlife decision-makers currently become familiar with a new issue is academia.
4. So academic legitimacy is still useful in the interim.
5. And if academic legitimacy turns out to still be important after TAI arrives, you also want to work on it now.
6. Either way, then, it isn’t worth spending too much time thinking about how TAI will influence academic legitimacy, because you’d do the same thing regardless.
That said, I find this argument suspiciously convenient, given that as an academic, of course I’m inclined to think academic legitimacy is important. This is definitely an area where I’m interested in getting more perspectives. At minimum, taking TAI seriously suggests to me that you should diversify the types of legitimacy you try to build, to better prepare for uncertainty.
Just chiming in to try to answer the following question:
> On the point about AI-enabled modelling of wild animal welfare and implications of different interventions: are there any existing promising examples of this?
I don’t think I’ve found any slam-dunk examples where currently available tools are used to predict *outcomes* of conservation interventions / population management decisions.
However, at least through the lens of conservation biology (where research is likely to be indirectly helpful for understanding wild animal welfare as well), I think there are lots of opportunities to use AI to learn from past efforts and to inform decisions around population management. See for example van Houtan et al. (2020), Sathishkumar et al. (2023), van Oosterhout (2023), Wu et al. (2023), and Agmata & Guðmundsson (2025). Even if AI hasn’t yet been specifically and explicitly used to predict the outcomes of population management decisions (I think it is very likely that we will be able to use it for that eventually), you may still reap benefits from simply being able to more accurately target the right place at the right time.
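To make the “right place at the right time” point a bit more concrete, the sketch below shows the general shape of that kind of workflow: fit an occurrence model on surveyed sites, then use it to rank unsurveyed sites for monitoring or management. The covariates, data, and model choice are all hypothetical and are not taken from any of the papers above:

```python
# Toy sketch of site prioritisation: fit a simple occurrence model on
# (synthetic) survey data, then rank unsurveyed sites by predicted
# occurrence probability. All covariates and labels are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_sites = 500

# Hypothetical environmental covariates per site.
covariates = np.column_stack([
    rng.normal(15, 5, n_sites),     # mean temperature (C)
    rng.normal(800, 200, n_sites),  # annual rainfall (mm)
    rng.uniform(0, 1, n_sites),     # fraction of forest cover
])
# Synthetic detections, driven mostly by forest cover.
p = 1 / (1 + np.exp(-(4 * covariates[:, 2] - 2)))
detected = (rng.uniform(size=n_sites) < p).astype(int)

model = GradientBoostingClassifier(random_state=0)
model.fit(covariates[:300], detected[:300])            # "surveyed" sites
scores = model.predict_proba(covariates[300:])[:, 1]   # "unsurveyed" sites
priority = np.argsort(scores)[::-1][:10]               # top 10 to target
print("highest-priority unsurveyed sites:", 300 + priority)
```

Even this very simple kind of prioritisation can change where limited monitoring or management effort goes, which is the sort of benefit I have in mind.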
For examples of AI use that seems directly relevant to wild animal welfare specifically, see Bierlich et al. (2024), Rast et al. (2024), and Murphy et al. (2025).