The problem with Pascal’s Wager is that it ignores reversed scenarios that would offset it: there could just as well be a god who punishes you for believing in God without good evidence.
I don’t think this objection applies to our scenario. Whether we choose to help the human or the animals, there will always be uncertainty about the long-term effects of our intervention, but the intervention should ideally be researched well enough that we can be confident its expected value is robustly positive.
Sure, there is a small chance, but the question is what we can do about it and whether the opportunity cost would be justifiable. And for the same reason that Pascal’s Wager fails, we can’t arbitrarily say “doing this may reduce suffering” and take that to justify the action, since the reversal, “doing this may increase suffering,” plausibly offsets it.