Managing Director at Hive. Effective Altruism and Animal Advocacy community builder, with experience in national, local, and cause-area-specific community building. Amateur philosopher, particularly keen on moral philosophy.
Kevin Xia 🔸
Hive's 2025 in Review and 2026 Plans and Funding Needs
Great question, thank you for working on this. An inter-cause-prioritization crux that I have been wondering about is something along the lines of:
"How likely is it that a world where AI goes well for humans also goes well for other sentient beings?"
It could probably be made much more precise and nuanced, but specifically, I would want to assess whether "trying to make AI go well for all sentient beings" is marginally better supported through directly related work (e.g., AIxAnimals work) or through conventional AI safety measures. The latter would be supported if, e.g., making AI go well for humans either inevitably leads to, or is necessary for, making AI go well for all sentient beings. (If it is merely necessary, the answer would further depend on how likely AI is to go well for humans in the first place.) Either way, a general assessment of AI futures that go well for humans would be a great and useful starting point for me.
I also think explicit estimates of exactly how neglected a (sub-)cause area is (e.g., in FTEs or total funding) would greatly inform some inter-cause-prioritization questions I have been wondering about. Assuming that explicit marginal cost-effectiveness estimates aren't really possible, neglectedness seems like the most common proxy I refer to, and the one I am missing solid numbers on.
Super interesting read, thanks for writing this! I have been thinking a bit about the US and China in an AI race and was wondering whether I could get your thoughts on two things I have been unsure about:
1) Can we expect the US to remain a liberal democracy once it develops AGI, especially given recent concerns around democratic backsliding? (I think I first saw this point brought up in a comment here.) And if we can't, would AGI under the US still be better?
2) On animal welfare specifically, I'm wondering whether the very pragmatic, techno-optimistic, efficiency-focused stance of China could make a pivot to alternative proteins (assuming they are ultimately the more efficient product) more likely than in the US, where alt proteins might be more of a politically charged topic?
I don't have strong opinions on either, but these two points first nudged me to be significantly less confident in my prior preference for the US in this discussion.
Interestingly, Claude's numbers would actually suggest that BOAS is the higher-EV decision (for some reason, it appears to double-count the risk; i.e., it took the EV that already accounts for the 60% failure rate and multiplied it again by 0.4).
Not that anyone here should (or would) make these decisions based on unchecked Claude BOTECs anyway; just found it to be an interesting flaw.
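For anyone curious what the double-counting looks like concretely, here is a minimal sketch; only the 60% failure / 40% success split comes from the exchange above, and the payoff figure is a hypothetical placeholder:

```python
# Minimal sketch of the double-counting flaw, with made-up numbers:
# only the 60% failure / 40% success split is from the thread; the
# $1,000,000 payoff is a hypothetical placeholder.
p_success = 0.4
payoff = 1_000_000  # hypothetical payoff if the bet succeeds

ev = p_success * payoff     # correct EV: the 60% failure risk is priced in once
flawed_ev = ev * p_success  # the flaw: success probability applied a second time

print(f"Correct EV: ${ev:,.0f}")        # Correct EV: $400,000
print(f"Flawed EV:  ${flawed_ev:,.0f}")  # Flawed EV:  $160,000
```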
Strategic Considerations from AI and Alternative Proteins
Reflections on AI Safety vs. AI x Animals without a clear conclusion
Building an Impact-focused Community
Always!
Just wanted to drop by and say that I have been really enjoying this sequence, and I deeply resonate with this idea of divine discontent!
I would like to add to this and applaud Vasco for being such a good sport about this: sharing the draft with me in advance and engaging in an unusually civil and productive back-and-forth with me to clear up misunderstandings, including nitpicky nuances and issues that arose from my own miscommunication. To anyone who would like to share feedback or ways to improve our community guidelines but prefers not to do so publicly: you can reach me/us by DM here on the Forum, by e-mail, or on Slack, and we also have an anonymous form! That said, we do generally think that a public discussion here could be valuable for other community spaces as well. Despite all this, I would also like to thank you, Vasco, for being a valued community member and for your exceptional moral seriousness, your commitment to taking ideas seriously, and your care.
Consider thanking whoever helped you
Strong agree! I also often get asked "why push careers, if the movement is primarily funding-constrained?" It's almost as though there is a bit of a misconception that only nonprofit work counts as a "career that helps animals," and I think part of this is that there is no good guide to making an impact in adjacent areas (outside of E2G, perhaps). I'm very excited to see the research you are producing!
Effektiv Spenden has donation vouchers that seem roughly in line with what you are thinking of!
Great post, thanks for looking into this! I previously noted four different types of interventions one might want to prioritize given AIxAnimals; I'd love to hear your thoughts on the implications of this intersection from a broader, zoomed-out perspective!
I found this post deeply and wonderfully relatable, especially the section on why you didn't pursue Philosophy! :)
Reflecting On My Career Journey
I am sure someone has mentioned this before, but…
For the longest time, and to a certain extent still, I have found myself deeply blocked from publicly sharing anything that wasn't significantly original. Whenever I found an idea already existing anywhere, even if it was a footnote on an underrated 5-karma post, I would be hesitant to write about it, since I thought I wouldn't add value to the "marketplace of ideas." In this abstract conception, the idea is already out there, so the job is done and the impact is set in place. I have talked to several people who feel similarly; people with brilliant thoughts and ideas who proclaim to have "nothing original to write about" and therefore refrain from writing.
I have come to realize that some of the most worldview-shaping and actionable content I have read and seen was not the presentation of a uniquely original idea, but often a better-presented, better-connected, or even just better-timed presentation of existing ideas. I now think of idea-sharing as a much more concrete but messy contributor to impact, one that requires the right people to read the right content in the right way at the right time, maybe even often enough, and sometimes even from the right person on the right platform.
All of that to say: the impact of your idea-sharing goes far beyond the originality of your idea. If you have talked to several cool people in your network about something and they found it interesting and valuable to hear, consider publishing it!
Relatedly, there are many more reasons to write other than sharing original ideas and saving the world :)
Great point, Michael! I agree on discounting potential counterfactual impacts of current interventions past X years, and I think short-term, large payoffs are a very good way of dealing with the overall situation. In addition to that, I'd argue that cheaper higher-welfare options and alternative proteins in X years suggest that interventions will be more cost-effective in X years, which might imply that we should "save and invest" (either literally, in capital, or conceptually, in movement capacity). Do you have any thoughts on that?
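To illustrate the intuition, here is a toy BOTEC; every number in it is made up for illustration, none come from the thread:

```python
# Toy sketch of the "save and invest" logic, with hypothetical numbers.
budget = 100_000   # dollars available today (assumed)
r = 0.05           # assumed annual investment return
years = 5          # assumed waiting period ("X years")
ce_now = 10        # assumed welfare units per dollar today
ce_later = 18      # assumed welfare units per dollar in X years
                   # (if welfare reforms / alt proteins get cheaper)

impact_now = budget * ce_now
impact_later = budget * (1 + r) ** years * ce_later

print(f"Donate now:   {impact_now:,.0f} welfare units")    # 1,000,000
print(f"Invest, wait: {impact_later:,.0f} welfare units")  # ≈ 2,297,309
```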
To me, this suggests prioritizing (1) short-term, large-payoff interventions, (2) interventions actively seeking to navigate and benefit animals through an AI transition (depending on how optimistic you are about the tractability of doing so), (3) interventions that robustly invest in movement capacity (depending on whether you think interventions are likely to be more cost-effective in the future), and perhaps (4) interventions that seem unlikely to change through an AI transition (depending on how optimistic you are about their current cost-effectiveness and how high your credence is in their robustness).
Really enjoyed reading this post!
This example reminded me of something similar I have been meaning to write about, but @AppliedDivinityStudies got there before me (and did so much better than I could have!): it is not just that influencing Big Normie Foundations could produce the same marginal impact due to a lower counterfactual, but also that there is far more money in them.
I think one can conceptualize impact as a function of how much influence we are affecting, where it is moving from (e.g., the counterfactual badness or lack-of-goodness), and where it is moving to. It seems to me like we are overly focused on affecting where the influence is moving to. Perhaps justifiably so, given the objections you mention in the post, but it seems far from obvious that our focus is optimally balanced.
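A crude way to formalize this (my own notation, not anything from the post): with $M$ the amount of influence moved and $V(\cdot)$ the value of the allocation it moves between,

$$\text{Impact} \approx M \times \big(V(\text{destination}) - V(\text{origin})\big)$$

All three factors offer the same multiplicative leverage, yet most attention seems to go into raising $V(\text{destination})$ alone.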