Managing Director at Hive. Effective Altruism and Animal Advocacy Community Builder, with experience in national, local, and cause-area-specific community building. Amateur Philosopher, particularly keen on moral philosophy.
Kevin Xia
Thank you for sharing your paper, Vera! I have been trying to discuss and understand a lot of adjacent themes around foundational philosophical assumptions, ad absurdum arguments, and moral intuitions vs. strict theory. Since you're asking how this will land with the audience here, I'd like to offer my personal account based on engaging with many EA(A)s. This is very much my subjective impression, not endorsed by any one person I discussed this with in particular. Also, I say a lot of "they" here; to be transparent, I endorse most, but not all, of these stances myself.
My impression is that EAs are largely aware of the counterintuitive implications their theory of choice faces. This has broadly been my experience with consequentialist-leaning people: they rarely claim, or even want to claim, that their ethical theory is perfect or aligned with all intuitions. They just believe the counterintuitive implications of their theory are less counterintuitive than those of other theories, and/or find other theories either inconsistent or arbitrary.
Underlying this is a general desire to reason through ethics, and underlying that is a general skepticism toward moral intuition as a definitive argument. My impression is that EAs are very analytical in their approach to philosophy, and as a result, they often don't consider their own moral intuitions particularly trustworthy, to varying degrees: some might want to abandon them altogether, others simply don't weigh them heavily.
I think these two points are why many EAs would read the paper and think something like: "Yes, I know. This isn't surprising. And also, show me a theory that doesn't run into such issues."
However, most EAs don't fundamentally start from a purely philosophical stance on what is true in ethics and try to apply it to all their actions. They "first" want to do good and almost instrumentally try to figure out what that means. I think most EAs are "quasi-consequentialist": when pressed, or when wanting to defend their views in a theoretical discussion, they consider consequentialism the strongest perspective to take, perhaps because they find it least counterintuitive, or closest to explaining how they conceptualize ethics.
When put into practice, this stance becomes largely action-guiding. It acts as a first proxy for identifying what to do or which choice is better. But unlike in theoretical discussions, a reductio ad absurdum isn't just a "bullet to bite" where one can rest easy knowing that others have "bigger bullets to bite"; it's a practical blocker that is rarely broken through. I think that's why everyone is "leaning utilitarian" or "leaning consequentialist": the theory acts as a guide and pushes the limits of one's otherwise-unquestioned moral intuitions to some degree, but not far beyond what one would consider generally reasonable. This is also exemplified by how often I hear things like "I know that X is probably right, but I just don't feel comfortable doing that." Strict theory comes a close second, but almost no one completely abandons their moral intuitions in favour of it.
I think a neat resolution to all of this is the concept of moral uncertainty. I wouldn't confidently claim that the people I describe above are likely to explicitly endorse this framing, but I think it explains much of the friction between theory and intuition. Under moral uncertainty, one doesn't simply act on one's best-guess ethical theory; one hedges across plausible theories, weighted by credence. That naturally prevents the kind of single-theory extremes that generate repugnant/counterintuitive conclusions in practice, even while allowing consequentialism to carry significant weight. This sort of uncertainty, I think, is very much in line with typical EAs' general way of thinking.
I think that's why most EAs won't feel particularly "addressed" by this or similar arguments, and why they'll likely end up with something like: "Yeah, I know this could be an issue, but I wouldn't do this anyway." I also think this explains why many will first try to argue based on empirical assumptions (e.g., can CAFOs even be net-positive?).
Hope this makes sense; curious to hear whether this mirrors your experience discussing this piece :)
Request for Proposals for AI x Animals
It might allow for more nuanced and actionable discussion to ask "how good"; perhaps something like "Promoting Ozempic will be among the most cost-effective ways to help animals."
Not necessarily the "biggest" win, but one that I didn't see coming and think is underrated is:
Malaysia's Islamic Authority Declares Cultivated Meat Can Be Halal (First Muslim-Majority Country)
Important by itself, but even more so under (AI-)accelerated alt-protein scenarios, where non-technological barriers to adoption (such as: is cultivated meat halal?) can become unnecessary bottlenecks, and ones that can already be addressed today.
Hive's 2025 in Review and 2026 Plans and Funding Needs
Really enjoyed reading this post!
You can influence Big Normie Foundation to move $1,000,000 from something generating 0 units of value per dollar (because it is useless) to something generating 10 units of value per dollar.
This example reminded me of something similar I have been meaning to write about, but @AppliedDivinityStudies got there before me (and did so much better than I could have!) - it is not just that influencing Big Normie Foundations could produce the same marginal impact due to the lower counterfactual value of their spending, but also that there is way more money in them.
I think one can conceptualize impact as a function of how much influence we are affecting, where it is moving from (e.g., the counterfactual badness/lack-of-goodness), and where it is moving to. It seems to me like we are overly focused on affecting where the influence is moving to. Perhaps justifiably so, for the objections you mention in the post, but it seems far from obvious that our focus is optimally balanced.
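As a minimal BOTEC sketch of that framing: the $1,000,000 and the 0 vs. 10 units per dollar are taken from the quoted example above, while the 8 units per dollar for an already-effective counterfactual grant is purely my own illustrative assumption.

```python
def impact(amount, value_from, value_to):
    """Impact of redirecting money, as a function of how much moves,
    where it comes from, and where it goes (units of value per dollar)."""
    return amount * (value_to - value_from)

# Big Normie Foundation: counterfactual spending generates ~0 units/$.
print(impact(1_000_000, value_from=0, value_to=10))  # 10,000,000 units

# Redirecting within an already-effective funder: the counterfactual
# grant was good too (8 units/$ is a made-up number for illustration).
print(impact(1_000_000, value_from=8, value_to=10))  # 2,000,000 units
```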
Great question, thank you for working on this. An inter-cause-prioritization crux that I have been wondering about is something along the lines of:
"How likely is it that a world where AI goes well for humans also goes well for other sentient beings?"
It could probably be much more precise and nuanced, but specifically, I would want to assess whether "trying to make AI go well for all sentient beings" is marginally better supported through directly related work (e.g., AIxAnimals work) or through conventional AI safety measures; the latter would be supported if, e.g., making AI go well for humans inevitably ensures, or is necessary for, making AI go well for all. Although if it is merely necessary, it would further depend on how likely AI is to go well for humans and such; but I think a general assessment of AI futures that go well for humans would be a great and useful starting point for me.
I also think various explicit estimates of exactly how neglected a (sub-)cause area is (e.g., in FTE or total funding) would greatly inform some inter-cause-prioritization questions I have been wondering about. Assuming that explicit marginal cost-effectiveness estimates aren't really possible, this seems like the most common proxy I refer to, and one I am missing solid numbers on.
Super interesting read, thanks for writing this! I have been thinking a bit about the US and China in an AI race and was wondering whether I could get your thoughts on two things I have been unsure about:
1) Can we expect the US to remain a liberal democracy once it develops AGI, especially given recent concerns around democratic backsliding? (I think I first saw this point brought up in a comment here.) And if we can't, would AGI under the US still be better?
2) On animal welfare specifically, I'm wondering whether China's very pragmatic, techno-optimistic, efficiency-oriented stance could make a pivot to alternative proteins (assuming they are an ultimately more efficient product) more likely than in the US, where alt-proteins might be more of a politically charged topic.
I don't have strong opinions on either, but these two points first nudged me to be significantly less confident in my prior preference for the US in this discussion.
Interestingly, Claude's numbers would actually suggest that BOAS is the higher-EV decision (for some reason, it appears to double-count the risk; i.e., it took the EV that already accounts for the 60% chance of failure and multiplied it again by the 0.4 success probability).
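To make the double-counting concrete with a toy number (only the 60% failure / 0.4 success split is from the BOTEC above; the payoff figure is hypothetical):

```python
p_success = 0.4          # 60% chance of failure, as in the BOTEC
payoff = 10_000_000      # hypothetical value if the project succeeds

ev = p_success * payoff            # 4,000,000: risk already priced in
double_counted = ev * p_success    # 1,600,000: same risk applied twice

print(ev, double_counted)
```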
Not that anyone here should (or would) make these decisions based on unchecked Claude BOTECs anyway; just found it to be an interesting flaw.
Strategic Considerations from AI and Alternative Proteins
Reflections on AI Safety vs. AI x Animals without a clear conclusion
Building an Impact-focused Community
Always!
Just wanted to drop by and say that I have been really enjoying this sequence, and I deeply resonate with this idea of divine discontent!
I would like to add to this and applaud Vasco for being such a good sport about this, sharing the draft with me in advance and engaging in an unusually civil and productive back-and-forth with me to clear up misunderstandings, including nitpicky nuances and issues that arose from my own miscommunication. To anyone who would like to share feedback or ways to improve our community guidelines, but prefers not to do so publicly: you can also reach me/us per DM here on the Forum, via e-mail, or on Slack, and we have an anonymous form! Although we do generally think that a public discussion here could be valuable for other community spaces as well. I would also like to, despite all this, thank you, Vasco, for being a valued community member, and for your exceptional moral seriousness, commitment to taking ideas seriously, and care.
Consider thanking whoever helped you
Strong agree! I also often get asked "why push careers, if the movement is primarily funding-constrained" - it's almost as though there is a bit of a misconception that only nonprofit work counts as a "career that helps animals," and I think part of this is that there is no good guide to making an impact in adjacent areas (outside of E2G, perhaps). I'm very excited to see the research you are producing!
Effektiv Spenden has donation vouchers that seem roughly in line with what you are thinking of!
Great post, thanks for looking into this! I previously noted four different types of interventions one might want to prioritize given AIxAnimals; I'd love to hear your thoughts on the implications for this intersection from a broader, zoomed-out perspective!
On the first point: that seems right. I think in a discussion like this, there can be a lot of confusion and conflation about what is meant by net-negative welfare, lives worth living/barely worth living, to what degree one can and should trust empirical assessments of animal welfare, etc. My best guess is that people are typically "somewhat" risk-averse and "somewhat" negative-leaning consequentialists, so the bar for empirical evidence to show that chickens live net-positive lives is intuitively set higher, both in how solid the evidence must be and in how positive it must show the lives to be. That being said, I do think one can distrust intuitions as information for moral judgments while leaning on them for empirical questions; that doesn't seem inherently at odds to me. (I do think that the latter still clashes with "using evidence and reason," of course, but can be accounted for with risk aversion and negative-leaning positions, which would change what "… to do the most good" means.) But at this point, I am just speculating about what people are thinking in making these arguments.
On the second point: my impression is that EAs rarely completely abandon moral intuition. They don't consider it particularly trustworthy, but they don't think it's useless either. It serves some function (e.g., finding the internally consistent theory that is least counterintuitive, or that satisfies the most, or the deepest-lying, moral intuitions), but then they'd basically have the theory take it from there (in theoretical discussion; once again, this tends to be different when the theory is actually acted upon). I agree that where to draw the line at which to abandon moral intuitions is plausibly arbitrary (others might disagree with calling that arbitrary), but I don't think it usually serves prior commitments (in my experience, EAs are the social-impact group most open to simply changing commitments). That being said, I do think some form of this (drawing a seemingly arbitrary line beyond which moral intuition is no longer trusted) is true for effectively everyone who doesn't completely lean into moral intuitionism. My best-guess explanation is basically what I expressed in the last three points of my initial comment: most (perhaps all) EAs I am thinking of in this context are "doing good" first, and underlying that is a strong moral compass/intuition that can be in real practical tension with trying to abandon moral intuition as information. So they try to find the right balance, with the balance on average tilting more toward a well-reasoned theory than toward intuition, but not completely.