I'm earning to give as a Quant Researcher at the Quantic group at Walleye Capital, a hedge fund. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.
I'm also on LessWrong and have a Substack blog.
Sentient Futures
Arthropoda remains my top pick out of those listed, but I chose Shrimp Welfare Project followed by the EA Animal Welfare Fund as my top two votes for strategic voting reasons.
I still think there are strong arguments for animal welfare dominating global health (at least on first-order effects), and that animal welfare is much more funding constrained and neglected than AI safety. (Invertebrates and wild animals still seem like the most impactful and neglected opportunities in animal welfare.) This year, I'm donating to Sentient Futures to try to improve coordination between advocates for neglected beings and the AI space.
I'd be doing less good with my life if I hadn't heard of effective altruism
My donations to effective charities are by far the most impactful thing I've ever done in my life, and that could not have happened without EA.
Organisations using Rethink Priorities' mainline welfare ranges should consider effects on soil nematodes, mites, and springtails.
The only argument I can think of against this would be optics. To be appealing to the public and a broad donor base, orgs might want to get off the train to crazytown before this stop. (I assume this is why GiveWell ignores animal effects when assessing their interventions' impact, even though those swamp the effects on humans.) Even then, it would make sense to share these analyses with the community, even if they wouldn't be included in public-facing materials.
I think most views where nonhumans are moral patients imply these tiny animals could matter. Like most people, I find the implications of this incredibly unintuitive, but I don't think that's an actual argument against the view. I think our intuitions about interspecies tradeoffs, like our intuitions about partiality towards friends and family, can be explained by evolutionary pressures on social animals such as ourselves, so we shouldn't accord them much weight.
Hi guys, thanks for doing this sprint! I'm planning on making most of my donations to AI for Animals this year, and would appreciate your thoughts on these follow-up questions:
You write that "We also think some interventions that aren't explicitly focused on animals (or on non-human beings) may be more promising for improving animal welfare in the longer-run future than any of the animal-focused projects we considered". Which interventions, and for which reasons?
Would you tentatively be more bullish on AI for Animals' movement-building activities than on work like AnimalHarmBench? Is there anything you think AI for Animals should be doing differently from what they're currently doing?
Do you know of anyone working (or interested in working) on the movement strategy research questions you discuss?
Do you have any tentative thoughts on how animal/digital mind advocates should think about allocating resources between (a) influencing the "transformed" post-shift world as discussed in your post and (b) ensuring AI is aligned to human values today?
Depopulation is Bad
Assuming utilitarian-ish ethics and that the average person lives a good life, this follows.
The question gets much more uncertain once you account for wild animal effects, but it seems likely to me that the average wild animal lives a bad life, and human activity reduces wild animal populations, which supports the same conclusion.
This year I donated to the Arthropoda Foundation!
One reason to wait before offsetting your lifetime impact all at once could be to preserve your capital's optionality. Cultivated meat could in the future become common and affordable, or your dietary preferences could otherwise change such that $10k was too much to spend.
Your moral views on offsetting could also change. For example, you might decide that the $10k would be better spent on longtermist causes, or that it'd be strictly better to donate the $10k to the most cost-effective animal charity rather than offsetting.
I basically never eat chicken
That's awesome. That probably gets you 90% of the way there already, even if there were no offset!
I think that's a great point! Theoretically, we should count all of those foundations and more, since they're all parts of "the portfolio of everyone's actions". (Though this would simply further cement the takeaway that global health is overfunded.)
Some reasons for focusing our optimization on "EA's portfolio" specifically:
Believing that non-EA-aligned actions have negligible effect compared to EA-aligned actions.
Since we wouldn't have planned to donate to ineffective interventions/cause areas anyway, it's unclear what effect including those in the portfolio would have on our decision-making, which is one reason they may be safely ignorable.
It's far more tractable to derive EA's portfolio than the portfolio of everyone's actions, or even the portfolio of everyone's charitable giving.
But I agree that these reasons aren't necessarily decisive. I just think there are enough reasons to do so, and this assumption has enough simplifying power, that for me it's worth making.
Thanks for this research! Do you know whether any BOTECs have been done where an intervention can be said to create X vegan-years per dollar? I've been considering writing an essay pointing meat eaters to cost-effective charitable offsets for meat consumption. So far, I haven't found any rigorous estimates online; a sketch of the calculation structure I have in mind is below.
(I think farmed animal welfare interventions are likely even more cost-effective and have a higher probability of being net positive. But it seems really difficult to know how to trade off the moral value of chickens taken out of cages / shrimp stunned versus averting some number of years of meat consumption.)
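For what it's worth, here's a minimal sketch of the BOTEC structure I have in mind. Every parameter value is a hypothetical placeholder chosen for illustration, not an estimate from any study; only the shape of the calculation is the point.

```python
# Minimal sketch of a "vegan-years per dollar" BOTEC for a diet-change
# intervention. All parameter values are hypothetical placeholders.

budget_usd = 10_000            # total donation (placeholder)
cost_per_person_reached = 1.0  # e.g. one ad impression or leaflet (placeholder)
conversion_rate = 0.005        # fraction of people reached who change diet (placeholder)
years_of_change = 2.0          # average duration of the diet change (placeholder)

people_reached = budget_usd / cost_per_person_reached
vegan_years = people_reached * conversion_rate * years_of_change

print(f"{vegan_years:.0f} vegan-years, "
      f"i.e. {vegan_years / budget_usd:.4f} vegan-years per dollar")
```

A rigorous estimate would replace each point-value placeholder with an empirically grounded distribution, but the multiplicative structure would look roughly like this.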
I don't think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, avoiding worst-case scenarios, and reducing ambiguity.
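To spell out the parenthetical with the same 51/49 numbers (a minimal derivation): if the gamble is accepted n times starting from universe value V_0, then

$$
\mathbb{E}[V_n] = (0.51 \cdot 2)^n \, V_0 = 1.02^n \, V_0 \to \infty,
\qquad
\Pr[\text{universe survives}] = 0.51^n \to 0,
$$

so a pure expected-value maximizer keeps accepting even as destruction becomes almost certain.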
I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.
So long as small orgs apply to large grantmakers like OP, and so long as one is confident that OP is trying to maximize expected value, I'd actually expect that OP's full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.
Does portfolio theory apply better at the individual level than the community level?
I think the individual level applies if you have risk aversion on a personal level. For example, I care about having personally made a difference, which biases me towards ideas that are less risky for me individually.
is this "k-level 2" aggregate portfolio a "better" aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios?
I think it's a tough situation because k=2 includes these unsavory implications Jeff and I discuss. But as I wrote, I think k=2 is just what happens when people think about everyone's donations game-theoretically. If everyone else is thinking in k=2 mode but you're thinking in k=1 mode, you're going to get funged such that your value system's expression in the portfolio could end up being much less than what is "fair". It's a bit like how the Nash equilibrium in the Prisoner's Dilemma is "defect-defect".
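As a minimal sketch of that last analogy (the payoff numbers are the conventional textbook ones, assumed here purely for illustration), one can verify mechanically that mutual defection is the unique Nash equilibrium:

```python
# Verify that defect-defect is the unique Nash equilibrium of a
# standard Prisoner's Dilemma with textbook payoffs T=5 > R=3 > P=1 > S=0.
from itertools import product

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # I cooperate, they defect (S)
    ("D", "C"): 5,  # I defect, they cooperate (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def is_nash(a, b):
    """Neither player can gain by unilaterally switching moves."""
    best_a = max(payoffs[(x, b)] for x in "CD")
    best_b = max(payoffs[(x, a)] for x in "CD")
    return payoffs[(a, b)] == best_a and payoffs[(b, a)] == best_b

print([moves for moves in product("CD", repeat=2) if is_nash(*moves)])
# -> [('D', 'D')]: defecting is each player's best response regardless
#    of what the other does, mirroring the k=2 funging dynamic above.
```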
At some point what matters is specific projects...?
I agree with this. My post frames the discussion in terms of cause areas for simplicity and because the lessons generalize to more people, but I think your point is correct.
I just wanted to say I really liked this post and consider it a model example of reasoning transparency!
I think animal welfare as a cause area is important and neglected within EA. Invertebrates have been especially neglected since Open Phil pulled out of the space, so my top choices are the Arthropoda Foundation and Shrimp Welfare Project (SWP).
With high uncertainty, I weakly prefer Arthropoda over SWP on the margin. Time is running short to influence the trajectory of insect farming in its early stages. The quotes for Arthropoda's project costs and overhead seem very reasonable. Also, while SWP's operational costs are covered through 2026, Arthropoda's projects may not happen at all without marginal funding, so donations to Arthropoda feel more urgent to me: they're existential for those projects. But all of this is held loosely and I'm very open to counterarguments.
Anecdotally, most people I know who I've asked do that!
I think these unsavory implications you enumerate are just a consequence of applying game theory to donations, rather than following specifically from my post's arguments.
For example, if Bob is all-in on avoiding funging and doesn't care about norms like collaboration and transparency, his incentives are exactly as you describe: Give zero information about his value system, and make donations secretly after other funders have shown their hands.
I think you're completely right that those are awful norms, and we shouldn't go all-in on applying game theory to donations. This goes both for avoiding funging and for my post's argument about optimizing "EA's portfolio".
However, just as we can learn important lessons from the concept of funging while discouraging the bad, I still think this post is valuable and includes some nontrivial practical recommendations.
Thanks for this; I agree that "integrity vs impact" is a more precise cleavage point for this conversation than "cause-first vs member-first".
Would you sometimes advocate for prioritizing impact (e.g. SUM shipping resources towards interventions) over alignment within the EA community?
Unhelpfully, I'd say it depends on the tradeoff's details. I certainly wouldn't advocate going all-in on one to the exclusion of the other. But to give one example of the way I think, I'd currently prefer that the marginal $1M be given to EA Funds' Animal Welfare Fund rather than used to establish a foundation to investigate and recommend improvements to EA's epistemics.
It seems that I credit the EA community with a lot more "alignment/integrity" than you do. This could arise from empirical disagreements, different definitions of "alignment/integrity", and/or different expectations we place on the community.
For example, the evidence Elizabeth presented of a lack of alignment/integrity in EA is that some veganism advocates on Facebook incorrectly claimed that veganism doesn't have tradeoffs, and weren't corrected by other community members. While I'd prefer people say true things over false things, especially when they affect people's health, this just doesn't feel important enough to update upon. (I've also just personally never heard any vegan advocate say anything like this, so it feels like an isolated case.)
One thing that could change my mind is learning about many more cases to the point that it's clear that there are deep systemic issues with the community's epistemics. If there's a lot more evidence on this which I haven't seen, I'd love to hear about it!
Thanks for the interesting conversation! Some scattered questions/observations:
Your conversation reminds me of the debate about whether EA should be cause-first or member-first.
My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I'd call myself EA.
Elizabeth's self-identification seems to me to be member-first: it appears to be based more on community members acting with integrity towards each other than on whether EA is maximizing the impartial good.
This might explain the difference between my and Elizabeth's attitudes about the importance of some EAs claiming, without being corrected, that veganism doesn't entail tradeoffs. I think being honest about health tradeoffs is important, but I'm far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I can understand why, from Elizabeth's perspective, this is so important. Do you think this is a fair characterization?
I'd love to understand more about the way Elizabeth reasons about the importance of raising awareness of veganism's health tradeoffs relative to vegan advocacy:
If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed animal effects, I struggle to understand why so much attention is being placed on veganism's health tradeoffs relative to vegan advocacy.
By analogy, this feels like sounding an alarm because EA's kidney donation advocates haven't sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn't kidney donation clearly the moral imperative?
Beautiful post. I especially enjoyed the personal images and wish more EA Forum posts did that.