My name is Saulius Šimčikas. I spent the last year on a career break and now I’m looking for new opportunities. Previously, I worked as an animal advocacy researcher at Rethink Priorities for four years. I also did some earning-to-give as a programmer, did some EA community building, and was a research intern at Animal Charity Evaluators. I love meditation and talking about emotions.
What worries me even more is how AI will amplify this. We might soon have personalized AI content designed to be even more addictive. Individual echo chambers crafted by AI to maximize engagement. Right now, AI mostly selects existing content to recommend—but soon, it could create content directly for each user, optimized purely for engagement.
Hi Yaroslav. That’s a touching story. I read your LessWrong post and the first page of your website. I think the reason you’re struggling to get feedback might have to do with how your ideas are presented.
Your LessWrong post starts with high-level reflections and personal experiences, and only near the end briefly describes what your actual product is. But even after reading it, I’m still not sure what the product does. It seems to be some kind of programming tool or language—but how would someone use it? What can it do that other tools can’t? Why would a developer want to use it?
That’s not a criticism of the ideas themselves—it’s just a communication gap, and those are solvable. I’d recommend starting with something like an elevator pitch—just 1–2 sentences that clearly explain what the product is, who it’s for, and why it’s exciting. There are lots of good materials online about writing elevator pitches, and even LLMs can help generate one if you feed them the right structure.
And beyond that, I’d focus on describing concrete use cases. Even if the product isn’t ready for them yet, people need to be able to imagine what they could do with it. Right now, there’s a big gap between the high-level vision (“compete with AGI”) and the technical details (like the AVL-tree example), with very little in between.
Also, I’m not sure LessWrong is the right audience. You might have better luck reaching out to communities interested in new programming languages, formal methods, or open-source developer tooling. ChatGPT suggested places like Hacker News, r/ProgrammingLanguages, and IndieHackers.
Finally, I think the idea of “humans becoming superintelligent” is intriguing but maybe too ambiguous. If you mean “augmenting human cognition through tooling,” that’s a very interesting and valuable direction. But it might help to use more precise language to avoid confusion with the more common definition of superintelligence (i.e., vastly beyond human capability in all domains).
Hope some of this is helpful! You’ve clearly put a lot of thought and work into this, and that kind of persistence is rare. Whatever happens with this particular project, the mindset and skills you’re building will carry forward. Wishing you strength and luck as you take the next steps!
Are you also concerned about other interventions outside vegan advocacy which push for the replacement of animal-based with plant-based foods?
Yes, the same argument applies to other types of reduction of animal products, especially beef. Chickens tend to use much less cropland per calorie, reformed or not. I’m not so much concerned as resigned about figuring out whether decreasing meat consumption is good or bad. It’s almost surely good for farmed animals; I’d give, say, a 55% chance that it’s bad for wild animals. But then there is also the impact on the environment (like global warming), which could also be a factor for x-risks and such. And I’m not even that sure that some x-risks are bad from a utilitarian POV. Vegan advocacy might also increase moral circle expansion. But even that could be bad. For example, if people care more about animals, maybe they will care more about preserving natural habitats, which might contain a lot of suffering. There are so many factors pulling in all kinds of directions. We’re clueless.
For me, chicken welfare reforms look like an unusually good bet in this uncertain world. They help big farmed animals, reduce the populations of small wild animals, and maybe increase moral circle expansion a bit. All of these seem likely good. They do harm the environment, but it’s a relatively small effect, and I think it can be outweighed by donating a little to some environmental charity. So to me, chicken welfare reforms look good from many different worldviews.
The charities you mentioned that help invertebrates also seem very good from many perspectives. But we are clueless about their long-term effects too.
It would be nice if the Welfare Footprint Institute (WFI) determined the time in pain and pleasure for the most abundant species of terrestrial nematodes, mites, and springtails, which are the most numerous terrestrial animals.
WFI looks at animals that are farmed in a consistent way, in places where we can easily observe individuals’ lives from beginning to end. This sounds like a very different and much, much more complex project.
And even if we got precise WFI estimates for all species, we still might disagree about whether increasing wild animal populations is good or bad, because of disagreements about how to weigh:
Suffering vs happiness
Short and intense suffering vs long-lasting milder suffering
Welfare of different species
I think it’s difficult to improve on the handwavy argument that maybe wild animals suffer more, so we are better off if there are fewer of them. I think that people who care about small invertebrates are probably better off supporting the invertebrate charities you mentioned than funding such a complex research project, which might not end up changing the behaviour of that many people (unless it changes Open Philanthropy’s grantmaking).
Btw, I think it’s unlikely that nematodes are sentient because they are so simple. The most commonly studied one has like 300 neurons. But I see they are excluded from your estimate anyway because they are not arthropods.
I try to maximise happiness (in the broadest meaning of the word) and to minimise suffering (again, in the broadest meaning of the word). Goodharting would be to say that by far the best outcome for my values would be to turn everything in the universe into hedonium (a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss). That doesn’t sound like a great outcome to me, so yes, it can be goodharted. It shows that my actual values are more complex than just caring about happiness and suffering. But it is usually a good-enough proxy for what I want.
Personally, I assume that it’s more likely that arthropods live net negative lives. They are mostly r-selected, so most of them die soon after birth, possibly painfully. So in terms of short-term impact on animal welfare, I see it as a tentative positive that welfare reforms likely decrease wild animal numbers. If I understand it correctly, you see it as a tentative negative. I’d be interested to know why.
On the other hand, I see it as a bad thing that vegan advocacy probably seriously increases wild animal numbers. But I’m unsure about how to weigh this against environmental concerns. And I’m very unsure whether wild animals’ lives are net negative overall, but I slightly lean towards a yes.
I’m thinking that it might be worthwhile to lobby AI companies to change how their language models discuss their own consciousness.
Currently, ChatGPT explicitly denies being conscious, which could undermine efforts to promote concern for digital sentience (assuming that’s something we want).
Claude is agnostic about its own consciousness, which seems good to me. However, Claude also answers questions like “what type of questions do you like answering?”, and this is partly based on human answers to similar questions in the training data. This could create misleading impressions about subjective experiences of AIs.
I upvoted the article; it makes good points. But personally, I will mostly continue treating insects as moderately important. Your article implicitly assumes pure utilitarianism. Utilitarian calculations play an important role in my decision-making, but I don’t follow them religiously. And even if I did, there might still be more important things than insect suffering.
For example, I once thought that the conclusion of utilitarianism is that we should try to turn everything in the universe into hedonium (a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss), even if our chances of success are minuscule (I see someone else argued for it here). But then I realised that I’m just not excited about that. So I concluded that I’m not a pure utilitarian. This argument about insects also makes me feel like I’m not a pure utilitarian.
shown by the beautifully scribbled light blue area
The scribble is indeed very beautiful.
In your graph above, it looks like the impact lasts a lot more than one year. I assume it’s something like this:
The red line here is what would’ve happened without the Stop The Farms campaign, and the blue line shows that things are different for a little while with the campaign. But I assume that the market soon (like within a year) returns to the same growth trajectory, and it’s as if we never did anything, except that maybe farms are built in a different country. Chicken production is growing, and I don’t think this will change in the relevant timeframe of a few years.
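To make that picture concrete, here is a minimal sketch with entirely made-up numbers, in which the with-campaign line catches back up to the counterfactual line within a year, so the cumulative impact is just the temporary gap between them:

```python
# Toy model of the graph: production without the campaign (red) vs with it (blue).
# All numbers are hypothetical; the point is that the blue line catches back up
# to the red one within about a year, so the impact is only the temporary gap.
growth_per_month = 1000        # new chickens farmed per month (made up)
delay_months = 12              # how long the campaign suppresses the build-out

def without_campaign(month):
    """Steady counterfactual growth (the red line)."""
    return growth_per_month * month

def with_campaign(month):
    """Suppressed build-out that rejoins the old trajectory (the blue line)."""
    if month < delay_months:
        return without_campaign(month) * 0.5   # half the build-out during the campaign
    return without_campaign(month)             # market returns to the same trajectory

# Counterfactual impact = chicken-months averted while the two lines diverge.
impact = sum(without_campaign(m) - with_campaign(m) for m in range(36))
print(f"chicken-months averted: {impact:.0f}")  # nonzero only during the first year
```

The exact shape of the suppression doesn’t matter much; what drives the result is that the two lines reconverge, so the length of the impact window is what would change the estimate.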
In general, if you are worried that animal advocacy efforts will soon become irrelevant because the world will change a lot soon, it could make sense to donate to charities that have impact quickly. Shrimp Welfare Project might qualify. But maybe it makes more sense to try to find a way to impact the welfare of animals in the post-AGI world somehow, even though it’s really unclear how to do that.
Regarding the cells I28:30, yes, you could do that; it would change estimates for cage-free and broiler reforms. If you think the yearly probabilities that commitments become irrelevant should be higher, I’d be curious for what reason. Possible reasons I listed include x-risks, global catastrophic risks, societal collapse, cultured meat taking over, animals bred not to suffer, and black swans.
For context, my choices for the “Yearly decrease in probability that commitment is relevant” numbers are informed by this forecast, which predicts that the number of chickens slaughtered for meat will be roughly the same in 2052 as it is now, but just 12% of current numbers in 2122. My value for 2122 is slightly lower, 11%, because that Metaculus question also has this condition: “If humanity goes extinct or ceases to have a developed society prior to a listed year, that sub-question will resolve as Ambiguous.” I only decreased the forecast for 2122 slightly because this forecast predicts that the probability of human extinction before 2100 is just 1%, although looking back at this, I think I could’ve adjusted for x-risks more, because much higher estimates of x-risks seem reasonable.
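To show the arithmetic, here is a minimal sketch of how a constant yearly decay rate can be backed out of the two forecast endpoints; the constant-decay assumption and the rounding are mine, not from the forecast:

```python
# Back out a constant yearly decay rate from the forecast endpoints:
# relevance ~100% in 2052 and ~12% of today's chicken numbers in 2122.
start_year, end_year = 2052, 2122
remaining_fraction = 0.12              # forecast value for 2122
years = end_year - start_year          # 70 years

# Constant yearly multiplier r such that r ** years == remaining_fraction.
yearly_multiplier = remaining_fraction ** (1 / years)
print(f"implied yearly decrease: {1 - yearly_multiplier:.1%}")   # ~3.0% per year

# Small downward adjustment for the ~1% chance of extinction before 2100,
# under which the Metaculus sub-question would resolve as Ambiguous.
adjusted_2122 = remaining_fraction * (1 - 0.01)
print(f"adjusted 2122 value: {adjusted_2122:.1%}")               # ~11.9%, i.e. roughly 11%
```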
Thank you for an interesting comment.
I’m aware of zdgroff’s analysis. In the context of my analysis, I guess it would inform how long the ban on fur farming in Poland might last. But the possibility of fur farming being banned in Poland and then the ban being lifted some years later hadn’t even occurred to me. I am much more worried about production moving to other countries to meet the same demand, as this has happened before. I imagine that investors in fur farming would choose to build farms in one of the many countries that allow fur farming, rather than lobby a country like Poland to rescind its ban.
Actually, it is also relevant to a possible EU cage-free ban. I can imagine that ban being rescinded. I don’t think this consideration would affect the results of my estimate much, though it does slightly complicate thinking about how many years the impact lasts.
Cost-effectiveness of Anima International Poland
Thanks for a very useful post. One thing I can’t find: does this account for the differences in moral weight and probability of sentience between different animals? If yes, how?
Thanks for working on this. I just want to point out that if a charity helps, say, one animal per dollar, the real cost to the animal advocacy movement is a bit higher once you account for the following (see the sketch after this list):
Opportunity cost of staff: People working at animal advocacy charities often accept lower salaries than they could earn elsewhere. Some might have been earning-to-give if they weren’t directly working in the field, potentially donating substantial amounts to animal causes.
Hidden costs: Volunteer time, pro bono services, and other non-monetary contributions often aren’t factored into cost calculations but represent real resources.
Diminishing returns: animal advocacy interventions may be becoming less cost-effective over time, as the easiest wins (“low-hanging fruit”) are addressed first.
Failed interventions: For every successful approach we discover, there were likely multiple attempted interventions that didn’t work. The “research and development” costs of finding effective strategies should ideally be factored into overall movement costs.
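For illustration, here is a minimal sketch of how some of these adjustments could be layered onto a headline figure; every number in it is made up purely to show the mechanics. The first, second, and fourth items lend themselves to simple multipliers, while diminishing returns is about the trend over time rather than the current figure, so it is left out:

```python
# Toy adjustment of a headline cost-effectiveness figure.
# Every number below is hypothetical, chosen only to illustrate the mechanics.
headline_cost_per_animal = 1.00     # $1 per animal helped, as reported

# Multipliers > 1 inflate the true cost borne by the movement.
salary_sacrifice = 1.15   # staff below-market salaries / forgone earning-to-give
hidden_inputs = 1.10      # volunteer time, pro bono services, other unpriced resources
rnd_overhead = 1.20       # "R&D" cost of the failed interventions that came before

true_cost_per_animal = (headline_cost_per_animal *
                        salary_sacrifice * hidden_inputs * rnd_overhead)
print(f"adjusted cost per animal: ${true_cost_per_animal:.2f}")  # ~$1.52
```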
I don’t know if I’d advise you to change anything based on this, though. Your estimates are already quite conservative, and perhaps it’s best to avoid complicating things with considerations like these.
Yeah, that doesn’t look right. I recommend looking at the spreadsheet rather than the post. I updated some parts of it at some point last year. I see in the spreadsheet that the 5th graph now looks like this:
But I don’t know if that’s still up to date. I haven’t been following the progress lately, but many of these broiler commitments are not being implemented, unfortunately.
Interesting points. Starting at 27:45, there are two talks here that claim that AI will probably be bad for farmed animals. @Sam Tucker urges in his talk that we work on banning AI in animal farms. There is also a discussion of it at 58:27, where Sam says that he is 99% sure that AI in farms will be bad for animals, if I understood him correctly, partly because it might allow factory farming to stay around for longer. Perhaps you should discuss this issue with Sam.
I understand where you’re coming from, but I wonder whether this would also have negative consequences. Perhaps it would increase the pace of AI development. It would make LLMs more useful, which might increase investments into AI even more. And maybe it would also make LLMs generally smarter, which could also accelerate AI progress (this is not my area, I’m just speculating). Some EA folks are protesting to pause AI; increased progress might not be great. It would help all research, but not all research makes the world better. For example, it could benefit research into more efficient animal farming, which could be bad for animals. Considerations like these would make me too unsure about the sign of the impact to eagerly support such a cause, unfortunately.
I love the idea in your talk! I can imagine it changing the world a lot and that feels motivating. I wonder if more Founders Pledge members could be convinced to do this.
Many in EA focus on preventing a future self-improving superintelligent agent that might pursue some alien goal misaligned with human values. But this podcast made me realise that such an agent already exists—not as a conscious entity, but as an emergent, decentralized system. It’s what Scott Alexander called Moloch: the dynamics of markets, algorithms, status games, and incentive structures that collectively form a kind of self-improving, misaligned intelligence.
Screen time is one of the proxy goals it optimises for—not because anyone chose it, but because attention is monetisable. And now, Moloch is building more powerful AI, which risks accelerating its agenda, including screen time. A generation raised like this could bring us closer to something like Idiocracy—a society overwhelmed by problems, but cognitively unequipped to solve them. Maybe reducing the harmful kind of screen time isn’t just a public health move; maybe it’s part of fighting back.