I recently completed a PhD exploring the implications of wild animal suffering for environmental management. You can read my research here: https://scholar.google.ch/citations?user=9gSjtY4AAAAJ&hl=en&oi=ao
I am now considering options in AI ethics, governance, or the intersection of AI and animal welfare.
Tristan Katz
Hey, totally agree that the sentiment isn’t always helpful—it’s one that motivates me, but it’s definitely not what I say in my elevator pitch to encourage others to donate!
Why I donate—and why I’m committing to a dynamic pledge
I hope I’m not taking this too seriously, but the examples Bob gave are of looking with concern for the bugs’ welfare. Entomologists presumably do that more than your average Instagram user because they actually study and handle the bugs. Others might just look at photos of bugs the same way they look at photos of plants or landscapes.
I feel a bit confused by this strategy. The normal idea of voting is to express your preference, such that the outcome reflects what the majority prefers.
If people treat it rather as an opportunity to communicate to others, that seems likely to distort the outcome. In regular political elections I’m ok with that, but in this context where voters are voting altruistically, I’m less sure.
I’m also confused because the act of writing here is a signal, and probably a clearer one! Could you not have done that and voted for who you genuinely think should be ‘elected’?
I agree. Maybe we can just say that veganism focuses on the wrong behavior? In addition to donating, I think voting can be more important than your individual diet. Many animal advocacy or rights organizations seem to recognize this, and refer to ‘animal advocates’ or ‘animal rights advocates’ to be more inclusive. They certainly do this for events where they seek to attract a lot of people, like animal rights marches. But for sure, veganism continues to be emphasized too much.
I also agree that the definition given in the post doesn’t reflect popular usage, which is probably something like:
Vegans avoid causing harm to animals, and so avoid purchasing or consuming animal products.
This doesn’t seem particularly maximizing. The first part reflects the moral commitment, and yes, it’s possible to be perfectionist about it, but it isn’t fundamentally so. The second part demands evidence of that moral commitment, and it’s also far from maximizing, since not consuming animal products is very achievable for most people. So, as long as this definition is interpreted in a reasonable way, it doesn’t seem particularly maximalist.
I also have the impression that these fellowships are getting more competitive each year. Do you share that perspective? If so, that would require a further adjustment to the calculation.
As the demotivated person you referenced, I appreciate this! But I think the case for demotivation is a bit stronger than this calculation suggests, so I’ll try to steelman it.
First of all, consider the framing: you’ve assumed that if I don’t get this fellowship I’ll continue with the job I already have, and that if I don’t apply I’ll just have leisure time. In reality many applicants will be looking for a new opportunity, and if they don’t get this they’ll probably take something else. They might (as I am) be making as many applications as they have energy for, such that the relevant counterfactual is another application, rather than free time.
Here are some more specific points:
I agree that the base rate is a poor guide to your personal probability. But I would expect the distribution of probabilities across applicants to be skewed, such that a small number of applicants each have a very significant chance, while the majority have next to no chance at all. This would be the case if the hiring team is willing to accept only exceptional candidates, there are more exceptional candidates than positions, and yet the majority of applicants are not exceptional. I expect this skew to be stronger the more applicants there are.
If you are going to consider the intangible value of the fellowship, then you also need to consider the intangible value of the counterfactual. This will either be that of your current work, or another job you might take (which is hard to estimate, but you could guess at it).
Similarly to the above, if you’re going to consider the information value of applying, then you also ought to consider the information value of applying for whatever else you might have spent the time on (if that’s what you would have done). Since many fellowships receive thousands of applications, I presume that most applications get no feedback (as in my own experience). The information value is therefore likely to be higher for opportunities with higher chances of success. Interviews give a greater information value still, so this effect is quite strong. This might even be the main reason why I prefer not to apply for positions where my chances seem so slim.
A related advantage of focusing on higher-probability positions is that you might be lucky enough to receive multiple offers around the same time. That lets you compare options after learning more about them during the process, and choose the one that seems like the best personal fit. If most of your applications are very low-probability, you’re more likely either to accept the first offer you get, or to pass on a good offer in the hope of a better one later, simply because the relevant information arrives at very different times.
Lastly, there is a psychological cost to firing off many low-chance applications. Rejections are, after all, demotivating. Getting interviews or offers is highly motivating, even if you end up declining them.
This leads to a much more complicated equation! I asked GPT to tie this all together, and, well… I’ll just link its answer here.
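To give a rough sense of the shape of that comparison, here’s a minimal sketch in Python. Every number is an illustrative placeholder rather than an estimate, and the value function is just my framing above: probability-weighted payoff, plus information value, minus psychological cost.

```python
# Sketch of "apply to the fellowship vs. spend the time on another
# application". All numbers are made-up placeholders.

def application_value(p_success, value_if_success, info_value, psych_cost):
    """Expected value of one application, intangibles included."""
    return p_success * value_if_success + info_value - psych_cost

# Hypothetical fellowship: slim odds, big payoff, little feedback.
fellowship = application_value(
    p_success=0.01,        # thousands of applicants
    value_if_success=100,  # value relative to the counterfactual job
    info_value=0.5,        # most applications get no feedback
    psych_cost=2,          # rejections are demotivating
)

# Hypothetical higher-probability alternative: better fit, smaller payoff.
alternative = application_value(
    p_success=0.15,
    value_if_success=30,
    info_value=3,          # interviews carry real information value
    psych_cost=1,
)

# The relevant comparison is another application, not leisure time.
print(f"Fellowship: {fellowship:.2f} vs. alternative: {alternative:.2f}")
```

On these made-up numbers the fellowship comes out negative once the psychological cost and foregone information value are counted, which is exactly the steelman above.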
So I remain pretty unsure as to whether it’s worth it for myself personally, but I might put in an application anyway :)
Yeah, I’m a little surprised (based on the interactions I’ve had with EAs about the topic). It would be nice if more people commented, to see whether it’s real disagreement or just a desire for more rigor.
Regarding (2), do you think sentience gradually increases, or that there’s a cutoff? My own memory is of having very intense joy and pain/sadness as a small child, maybe more so than now. So while I put less credence in the sentience of young animals, my assumption is that, if sentient, they could have intense feelings.
I want to push back on (3) (larger animals having ‘great’ lives). I’m not sure if that’s quite what you meant, but take any pet cat or dog, and then make them hungry, often. Make them cold, make them occasionally very afraid, and give them parasites and untreated injuries. Make them live only 1⁄4 of their full lifespan. Now even if that life is worth living, surely it isn’t a ‘great’ one!
I think, by cross-posting this here, you’re largely preaching to the choir—and I’m no exception. I agree that rewilding is an ethically questionable practice and that we should focus a lot (or most) of our attention on invertebrates. I especially like the argument against viewing conservation as necessary for the future survival of life—this is so common and poorly justified.
That said, I think this post could do a much better job of steelmanning the opposition. The assumption that welfare in nature is net-negative is far from obvious: good arguments to the contrary have been given here. There’s also this article, although I think its assumptions are more questionable. To illustrate, consider:
The most numerous wild animals, which your argument focuses on, might not be sentient at all. The jury is still out.
If they are sentient, they might not be so during that first week of life when they’re most likely to die.
Even if they were, since none of us has lived a week and then died as an insect, we should be careful about assuming what that experience is like: doing things and seeing the world for the first time might be positive experiences (think about how a toddler seems to enjoy many very ordinary things); they might suffer less than humans when dying; and even humans often report numbness or even positive near-death experiences (this is all explained better in the previously linked post).
Even if their deaths are very painful, they’re likely to occur over minutes, whereas there are 168 hours in a week. That this isn’t worth it is non-obvious. While we often say that we wouldn’t trade a painful experience for something good, we normally say that while expecting to keep living. If we knew we would die in a week, we might be willing to endure a few minutes of pain for another week of precious life.
Note, I’m not saying we can’t or shouldn’t make such judgements. I’m just saying we shouldn’t be quick to make them, especially when we’re talking about whether thousands of other animals should live or not. We should steelman the view that wild animals have good lives.
Ok that’s fair, I guess ‘appalled’ was just my own personal reaction as someone who’d hoped to have a chance at this. And I agree, if I put my own interests aside, it is good that a lot of people are going for this. And it’s good if I’m not needed.
By ‘oversaturated’ I didn’t mean that there’s no need for greater talent, but rather that many people will be wasting their time trying to pursue this path. Even if many of the 7500 applicants weren’t aligned with FIG’s mission… I’m sure the number who are is still very high. With such slim chances of getting a position, I don’t think it makes sense to recommend people pursue this path unless they’re really a good fit or highly talented.
The number of people living in extreme poverty is projected to increase
Tristan Katz’s Quick takes
I felt very discouraged when I heard that there were over 1300 applications for the Gov AI winter fellowship. But now I’m frankly appalled to hear that there were over 7500 applications for the 2025 FIG Winter Fellowship.
Should we officially declare that AI governance is oversaturated, and not recommend this career path except for the ultra-talented?
As a (wild) animal welfare person, I am disappointed to see this. Your comment was thoughtful and well-intentioned.
It doesn’t apply here, but in general I expect animal welfare people are more likely to disapprove of certain views, or take more of a combative attitude to public debates, because so much of normal discourse sneaks in speciesist assumptions and is actively harmful to animals. But I don’t think that’s the explanation here—I largely agree with your comment.
To respond to your original comment, I think with a bit of creativity you will be able to find politically tractable interventions. For example, people tend to view humane management of animals in cities quite positively. There’s also a growing movement for compassionate conservation. It’s more focused on doing no harm than on actively helping wild animals, but at least it is a movement towards thinking about the welfare of wild animals. I do think that there will often be a trade-off between effectiveness and political tractability, though, and it may be worth pursuing sub-optimal interventions for a while in order to gain greater political momentum towards helping wild animals.
You’ve said you’re in favour of slowing/pausing, yet your post focuses on ‘making AI go well’ rather than on pausing. I think most EAs would assign a significant probability that near-term AGI goes very badly—with many literally thinking that doom is the default outcome.
If that’s even a significant possibility, then isn’t pausing/slowing down the best thing to do no matter what? Why be optimistic that we can “make AGI go well” and pessimistic that we can pause or slow AI development for long enough?
I enjoyed this post a lot while reading it, but after reflecting (and discussing with my local group) I feel more unsure. Consider that we can ask whether we should encourage ‘heroic responsibility’ and try to foster this kind of radical, positive altruism at three different levels:
1. Personally, as an individual
2. Within EA
3. Within society as a whole.
The post seems to argue for all three. It talks specifically about the need for a cultural shift. I feel very convinced of (1) (I’d value this highly for myself), I’m less convinced of (2), and I feel quite unconvinced of (3).
Heroic responsibility & burnout
I think it’s quite clear that it would be beneficial if this way of thinking became widespread in EA and society at large. But it’s less clear whether that’s a realistic expectation. I actually see a lot of risks to encouraging heroic responsibility within EA; EA pivoted away from heroic responsibility toward more toned-down messaging about doing good quite intentionally. As kuhanj notes, without the positive, enjoying-the-process attitude argued for in part 2, there’s a risk that heroic responsibility leads to burnout. And it seems to me that enjoying the process is actually not always that easy: meditation just isn’t for everyone; I’ve meditated for a number of years and can’t say it transformed me. I would be happy to see workshops on this at EA retreats, but it doesn’t seem worth it to ask all EAs to spend large amounts of time on this when we’re not sure it’ll work, and the current strategy of simply not asking people to take on all the world’s problems also works OK. For people new to EA, the movement might also be very off-putting if it seemed to ask this much of you.

Is heroic responsibility learned or innate?
I also think that heroic responsibility might be determined more by genes or early childhood experiences than anything else. The examples of heroes don’t seem to be of people who arrived there through some deep insight; rather, these are people who were motivated by justice to begin with. I know that, for myself, I am more motivated in this way than my siblings are now, but I was also more motivated when I was 10 years old. Resources spent trying to transform people in this way might be wasted, and might be better spent encouraging people who already have this disposition to join EA.
I enjoyed the original post, but found it somewhat hard to identify the key points and how they were connected. This summary makes that much clearer!
This is awesome. I really liked how you considered both short-term and long-term effects, both clear and diffuse ones, and noted how they changed your confidence.
It seems like this should be highly valuable for:
People working in animal advocacy
Donors who want to make their own judgements about which interventions to support
Other researchers who want to build on this, or to identify how different assumptions lead to support for different causes.
I agree with @david_reinstein that it would be nice to see this made into a more visually polished and navigable form, but in terms of the content itself I found it very easy to understand the reasoning and assessments.
My intention would be to gradually increase my donations. So in the past I was earning just slightly above the median, but still gave 15%. In general I think it’s good to have an idea of what income you’re comfortable with, and then increase donations significantly as you pass that point. But I set the bar really high here just because I’m aware that my perception of what is enough might change at different life stages.
To be honest, I think my model is super crude and probably not ideal; I would really like to see other models like this!
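For what it’s worth, here’s a minimal sketch of how the threshold idea above could be formalized; the threshold and rates are hypothetical placeholders, not a recommendation.

```python
# Sketch of a "dynamic pledge": a baseline rate up to a comfort threshold,
# then a much higher marginal rate on income above it.
# Threshold and rates are hypothetical placeholders.

def annual_donation(income, threshold=60_000, base_rate=0.10, marginal_rate=0.50):
    """Donation for a given annual income under a two-rate pledge."""
    if income <= threshold:
        return income * base_rate
    return threshold * base_rate + (income - threshold) * marginal_rate

for income in (40_000, 60_000, 80_000, 100_000):
    print(income, round(annual_donation(income)))
```

The high marginal rate reflects the idea that income above your comfort threshold does relatively little for your own wellbeing, so most of it can go to donations.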