I'm earning to give as a Quant Researcher at the Quantic group at Walleye Capital, a hedge fund. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.
I'm also on LessWrong and have a Substack blog.
Organisations using Rethink Priorities' mainline welfare ranges should consider effects on soil nematodes, mites, and springtails.
The only argument I can think of against this would be optics. To be appealing to the public and a broad donor base, orgs might want to get off the train to crazytown before this stop. (I assume this is why GiveWell ignores animal effects when assessing their interventions' impact, even though those swamp the effects on humans.) Even then, it would make sense to share these analyses with the community, even if they wouldn't be included in public-facing materials.
I think most views where nonhumans are moral patients imply these tiny animals could matter. Like most people, I find the implications of this incredibly unintuitive, but I don't think that's an actual argument against the view. I think our intuitions about interspecies tradeoffs, like our intuitions about partiality towards friends and family, can be explained by evolutionary pressures on social animals such as ourselves, so we shouldn't accord them much weight.
Hi guys, thanks for doing this sprint! I'm planning on making most of my donations to AI for Animals this year, and would appreciate your thoughts on these follow-up questions:
You write that "We also think some interventions that aren't explicitly focused on animals (or on non-human beings) may be more promising for improving animal welfare in the longer-run future than any of the animal-focused projects we considered". Which interventions, and for which reasons?
Would your tentative opinion be more bullish on AI for Animals' movement-building activities than on work like AnimalHarmBench? Is there anything you think AI for Animals should be doing differently from what they're currently doing?
Do you know of anyone working (or interested in working) on the movement strategy research questions you discuss?
Do you have any tentative thoughts on how animal/digital mind advocates should think about allocating resources between (a) influencing the "transformed" post-shift world as discussed in your post and (b) ensuring AI is aligned to human values today?
Depopulation is Bad
Assuming utilitarian-ish ethics and that the average person lives a good life, this follows.
The question gets much more uncertain once you account for wild animal effects, but it seems likely to me that the average wild animal lives a bad life, and human activity reduces wild animal populations, which supports the same conclusion.
This year I donated to the Arthropoda Foundation!
One reason to perhaps wait before offsetting your lifetime impact all at once could be to preserve your capital's optionality. Cultivated meat could in the future become common and affordable, or your dietary preferences could otherwise change such that $10k was too much to spend.
Your moral views on offsetting could also change. For example, you might decide that the $10k would be better spent on longtermist causes, or that it'd be strictly better to donate the $10k to the most cost-effective animal charity rather than offsetting.
I basically never eat chicken
That's awesome. That probably gets you 90% of the way there already, even if there were no offset!
I think that's a great point! Theoretically, we should count all of those foundations and more, since they're all parts of "the portfolio of everyone's actions". (Though this would simply further cement the takeaway that global health is overfunded.)
Some reasons for focusing our optimization on "EA's portfolio" specifically:
Believing that non-EA-aligned actions have negligible effect compared to EA-aligned actions.
Since we wouldn't have planned to donate to ineffective interventions/cause areas anyway, it's unclear what effect including those in the portfolio would have on our decisionmaking, which is one reason why they may be safely ignorable.
It's far more tractable to derive EA's portfolio than the portfolio of everyone's actions, or even the portfolio of everyone's charitable giving.
But I agree that these reasons aren't necessarily decisive. I just think there are enough reasons to do so, and this assumption has enough simplifying power, that for me it's worth making.
Thanks for this research! Do you know whether any BOTECs have been done where an intervention can be said to create X vegan-years per dollar? I've been considering writing an essay pointing meat eaters to cost-effective charitable offsets for meat consumption. So far, I haven't found any rigorous estimates online.
(I think farmed animal welfare interventions are likely even more cost-effective and have a higher probability of being net positive. But it seems really difficult to know how to trade off the moral value of chickens taken out of cages / shrimp stunned versus averting some number of years of meat consumption.)
I don't think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, avoiding worst case scenarios, and reducing ambiguity.
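To spell out the arithmetic behind that parenthetical (my own illustration of the gamble as stated, not part of the original claim): each accepted round multiplies the expected value of the universe by $0.51 \times 2 = 1.02$, yet the probability the universe survives $n$ rounds is

$$P(\text{survive } n \text{ rounds}) = 0.51^n \to 0 \quad \text{as } n \to \infty,$$

so expected value grows without bound ($1.02^n$) even as destruction becomes almost certain.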
I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.
So long as small orgs apply to large grantmakers like OP, and so long as one is locally confident that OP is trying to maximize expected value, I'd actually expect that OP's full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.
Does portfolio theory apply better at the individual level than the community level?
I think the individual level applies if you have personal risk aversion. For example, I care about having personally made a difference, which biases me towards certain individually less risky ideas.
is this 'k-level 2' aggregate portfolio a 'better' aggregation of everyone's information than the 'k-level 1' of whatever portfolio emerges from everyone individually optimising their own portfolios?
I think it's a tough situation because k=2 includes these unsavory implications Jeff and I discuss. But as I wrote, I think k=2 is just what happens when people think about everyone's donations game-theoretically. If everyone else is thinking in k=2 mode but you're thinking in k=1 mode, you're going to get funged such that your value system's expression in the portfolio could end up being much less than what is "fair". It's a bit like how the Nash equilibrium in the Prisoner's Dilemma is "defect-defect".
At some point what matters is specific projects...?
I agree with this. My post frames the discussion in terms of cause areas for simplicity and since the lessons generalize to more people, but I think your point is correct.
I just wanted to say I really liked this post and consider it a model example of reasoning transparency!
I think animal welfare as a cause area is important and neglected within EA. Invertebrates have been especially neglected since Open Phil pulled out of the space, so my top choices are the Arthropoda Foundation and Shrimp Welfare Project (SWP).
With high uncertainty, I weakly prefer Arthropoda over SWP on the margin. Time is running short to influence the trajectory of insect farming in its early stages. The quotes for Arthropoda's project costs and overhead seem very reasonable. Also, while SWP's operational costs are covered through 2026, Arthropoda's projects may not happen at all without marginal funding, so donations to Arthropoda feel more urgent to me since they're more existential. But all of this is held loosely and I'm very open to counterarguments.
Anecdotally, most people I know who I've asked do that!
I think these unsavory implications you enumerate are just a consequence of applying game theory to donations, rather than following specifically from my post's arguments.
For example, if Bob is all-in on avoiding funging and doesn't care about norms like collaboration and transparency, his incentives are exactly as you describe: Give zero information about his value system, and make donations secretly after other funders have shown their hands.
I think you're completely right that those are awful norms, and we shouldn't go all-in on applying game theory to donations. This goes both for avoiding funging and for my post's argument about optimizing "EA's portfolio".
However, just as we can learn important lessons from the concept of funging while discouraging the bad, I still think this post is valuable and includes some nontrivial practical recommendations.
Thanks for this; I agree that "integrity vs impact" is a more precise cleavage point for this conversation than "cause-first vs member-first".
Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?
Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of the way I think, I'd currently prefer that the marginal $1M be given to EA Funds' Animal Welfare Fund rather than used to establish a foundation to investigate and recommend improvements to EA's epistemics.
It seems that I think the EA community has a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.
For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some veganism advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs, and weren't corrected by other community members. While I'd prefer people say true things rather than false things, especially when they affect people's health, this just doesn't feel important enough to update upon. (I've also just personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)
One thing that could change my mind is learning about many more cases to the point that it's clear that there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this which I haven't seen, I'd love to hear about it!
Thanks for the interesting conversation! Some scattered questions/observations:
Your conversation reminds me of the debate about whether EA should be cause-first or member-first.
My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I'd call myself EA.
Elizabeth's self-identification seems to me to be member-first, given that it seems based more upon community members acting with integrity towards each other than upon whether or not EA is maximizing the impartial good.
This might explain the difference between my and Elizabeth's attitudes about the importance of some EAs claiming that veganism doesn't entail tradeoffs without being corrected. I think being honest about health tradeoffs is important, but I'm far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I could understand why, from Elizabeth's perspective, this is so important. Do you think this is a fair characterization?
I'd love to understand more about the way Elizabeth reasons about the importance of raising awareness of veganism's health tradeoffs relative to vegan advocacy:
If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed animal effects, I struggle to understand why so much attention is being placed on veganism's health tradeoffs relative to vegan advocacy.
By analogy, this feels like sounding an alarm because EA's kidney donation advocates haven't sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn't kidney donation clearly the moral imperative?
(I didn't downvote your comment, by the way.)
I feel bad that my comment made you (and a few others, judging by your comment's agreevotes) feel bad.
As JackM points out, that snarky comment wasn't addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, a view which assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like "we have the right to exploit animals because we're stronger than them", or "exploiting animals is the natural order", which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally don't even argue for hierarchicalism because it's just such a dubious view. I wouldn't write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k is what originally encouraged me to enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn't really justify his view in his comment thread. I've never read Zvi justify that view anywhere either. I've heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Solely by virtue of our shared species, helping humans may be lexicographically preferred to helping animals, or perhaps human preferences should be given an enormous multiplier.
I use the term "overwhelming" because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you'd need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules' argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don't endorse that resolution.)
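To make that threshold explicit (a symbolic sketch of the comparison just described, not a new estimate): if a BOTEC finds the animal intervention $k$ times as cost-effective as global health under equal moral weights, then a species-based multiplier $m$ on human interests flips the ranking only when

$$m > k,$$

and per the BOTECs referenced above, $k$ is roughly $100$ to $1000$.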
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There's just no prior for why that would be the case.
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they'd have to be really certain that animals aren't conscious to endorse global health here. Even if there's a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they'd still merit a significant fraction of EA funding. (Probably still more than they're currently receiving.)
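Spelling out that expected-value step (my own sketch using the 10% figure above): with credence $p = 0.1$ that chickens are conscious, and cost-effectiveness $c$ for corporate campaigns conditional on consciousness (zero otherwise),

$$\mathbb{E}[\text{cost-effectiveness}] = p \cdot c = 0.1\,c,$$

so the estimate shrinks only tenfold; if $c$ exceeds the global health benchmark by the multiples discussed above, the discounted figure still comes out ahead.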
I think it's fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans' are, and they act all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while I'm not a consciousness expert at all, the New York Declaration on Animal Consciousness says that "there is strong scientific support for attributions of conscious experience to other mammals and to birds". Rethink Priorities' and Luke Muehlhauser's work for Open Phil corroborate that. So Yud's view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel like Yud's Facebook post needed a very high burden of proof to be convincing to me. Instead, it seems like he just kept explaining what his model (a higher-order theory of consciousness) believes without actually justifying his model. He also didn't admit any moral uncertainty about his model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no uncertainty, and didn't make any attempt to justify them. So I didn't find anything about his Facebook post convincing.
To me, the strongest reason to believe that animals don't count at all is that smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven't read anything remotely convincing that justifies that view on the merits. That's why I didn't even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
They didn't have the mental bandwidth to deal with an audience I suspect would be hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
They may have felt like most EAs don't share the basic intuitions underlying their views, so they'd be talking to a wall. The idea that pigs aren't conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but I'd need to see way more justification than I've seen.
In 2017, Holden's personal reflections "indicate against the idea that e.g. chickens merit moral concern". In 2018, Holden stated that "there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not 'conscious' in a morally relevant way".
@AGB 🔸 would you be willing to provide brief sketches of some of these stronger arguments for global health which weren't covered during the Debate Week? Like Nathan, I've spent a ton of time discussing this issue with other EAs, and I haven't heard any arguments I'd consider strong for prioritizing global health which weren't mentioned during Debate Week.
My donations to effective charities are by far the most impactful thing I've ever done in my life, and that could not have happened without EA.