I'm earning to give as a Quant Researcher at the Quantic group at Walleye Capital, a hedge fund. In my free time, I enjoy reading, discussing moral philosophy, and dancing bachata and salsa.
I'm also on LessWrong and have a Substack blog.
Thanks for the compliment :)
When I write "skepticism of formal philosophy", I more precisely mean "skepticism that philosophical principles can capture all of what's intuitively important". Here's an example of skepticism of formal philosophy from Scott Alexander's review of What We Owe The Future:
I'm not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that's true, I will just not do that, and switch to some other set of axioms. If I can't find any system of axioms that doesn't do something terrible when extended to infinity, I will just refuse to extend things to infinity...I realize this is "anti-intellectual" and "defeating the entire point of philosophy".
You make a good point regarding the relative niche-ness of animal welfare and AI x-risk. I agree that my post's analogy is crude and there are many reasons why people's dispositions might favor AI x-risk reduction over animal welfare.
Thanks Gage!
That's a good point I hadn't considered! I don't think that's OP's crux, but it is a coherent explanation of their neartermist cause prioritization.
Absolutely! Most of what's important in this essay is just a restatement of your inspiring CEA from months ago :)
This extra context makes the case much stronger.
Thanks for being charitable :)
On the percentile of a product of normal distributions: I wrote this Python script, which shows that the 5th percentile of a product of normally distributed random variables is in general a product of much higher individual percentiles (in this case, the 16th percentile):
import random

MU = 100
SIGMA = 10
N_SAMPLES = 10 ** 6
TARGET_QUANTILE = 0.05
INDIVIDUAL_QUANTILE = 83.55146375  # For comparison: NORMINV(0.05,100,10) from Google Sheets

# Sample products of three independent N(MU, SIGMA) variables
samples = []
for _ in range(N_SAMPLES):
    r1 = random.gauss(MU, SIGMA)
    r2 = random.gauss(MU, SIGMA)
    r3 = random.gauss(MU, SIGMA)
    samples.append(r1 * r2 * r3)
samples.sort()

# The sampled 5th percentile of the product
product_quantile = samples[int(N_SAMPLES * TARGET_QUANTILE)]
implied_individual_quantile = product_quantile ** (1 / 3)
print(implied_individual_quantile)  # ~90, which is the *16th* percentile by the empirical rule
I apologize for overstating the degree to which this reversion occurs in my original reply (which claimed an individual percentile of 20+ to get a product percentile of 5), but I hope this Python snippet shows that my point stands.
I did explicitly say that my calculation wasn't correct. And with the information on hand, I can't see how I could've done better.
This is completely fair, and I'm sorry if my previous reply seemed accusatory or like it was piling on. If I were you, I'd probably caveat your analysis's conclusion to something more like "Under RP's 5th percentile weights, the cost-effectiveness of cage-free campaigns would probably be lower than that of the best global health interventions".
Hi Hamish! I appreciate your critique.
Others have enumerated many reservations with this critique, which I agree with. Here I'll give several more.
why isn't the "1000x" calculation actually spelled out?
As you've seen, given Rethink's moral weights, many plausible choices for the remaining "made-up" numbers give a cost-effectiveness multiple on the order of 1000x. Vasco Grilo conducted a similar analysis which found a multiple of 1.71k. I didn't commit to a specific analysis for a few reasons:
I agree with your point that uncertainty is really high, and I don't want to give a precise multiple which may understate that uncertainty.
Reasonable critiques can be made of pretty much any assumptions which imply a specific multiple. Though these critiques are important for robust methodology, I wanted the post to focus specifically on how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism. I believe that given Rethink's moral weights, most plausible choices for the additional assumptions will yield a cost-effectiveness multiple on the order of 1000x.
(Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I'm not sure there's a better approach without more information about the input distributions.)
Sadly, I don't think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles; in fact, in general, it's going to be a product of much higher percentiles (20+).
To see this, imagine a bridge held up by 3 spokes which are independently hammered in, where each spoke has a 5% chance of breaking each year. For the bridge to fall, all 3 spokes need to break. That's not the same as the bridge having a 5% chance of falling each year; the chance is actually far lower (about 0.0125%). For the bridge to have a 5% chance of falling each year, each spoke would need a 37% chance of breaking each year.
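The analogy's arithmetic can be checked directly (a minimal sketch using the numbers above):

```python
# Probability the bridge falls: all 3 independent spokes must break.
p_spoke = 0.05
p_bridge = p_spoke ** 3
print(f"{p_bridge:.4%}")  # 0.0125% per year, far below 5%

# Per-spoke probability needed for a 5% yearly chance of collapse: solve p**3 = 0.05
p_needed = 0.05 ** (1 / 3)
print(f"{p_needed:.0%}")  # about 37%
```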
As you stated, knowledge of the distributions is required to rigorously compute percentiles of this product, but it seems likely that the 5th percentile case would still yield a multiple several times that of GiveWell top charities.
let's not forget second order effects
This is a good point, but the second-order effects of global health interventions on animals are likely much larger in magnitude. I think some second-order effects of many animal welfare interventions (moral circle expansion) are also positive, and I have no idea how it all shakes out.
Hi Emily,
Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.
our current estimates of the gap between marginal animal and human funding opportunities is very different from the one in your post – within one order of magnitude, not three.
Holden has stated that "It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness." As OP continues researching moral weights, OP's marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?
Our current moral weights, based in part on Luke Muehlhauser's past work, are lower.
Along with OP's neartermist cause prioritization, your comment seems to imply that OP's moral weights are 1-2 orders of magnitude lower than Rethink's. If that's true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between FAW and GHW.
Does OP plan to reveal their moral weights and/or their methodology for deriving them? It seems that opening up the conversation would be quite beneficial to OP's objective of furthering moral weight research until uncertainty is reduced enough to act upon.
I'd like to reiterate how much I appreciate your openness to feedback and your reply's clarification of OP's disagreements with my post. That said, this reply doesn't seem to directly answer this post's headline questions:
How much weight does OP's theory of welfare place on pleasure and pain, as opposed to nonhedonic goods?
Precisely how much more does OP value one unit of a human's welfare than one unit of another animal's welfare, just because the former is a human? How does OP derive this tradeoff?
How would OP's views have to change for OP to prioritize animal welfare in neartermism?
Though you have no obligation to directly answer these questions, I really wish you would. A transparent discussion could update OP, Rethink, and many others on this deeply important topic.
Thanks again for taking the time to engage, and for everything you and OP have done to help others :)
I didn't cite a single study; I cited a comment which referenced several studies, and quoted one of them.
I agree with your caveat about neuron counts, though I still think people should update on an order-of-magnitude difference in neuron count. Do you have a better proposal for comparing the moral worth of a human fetus and an adult chicken?
I think the argument that abortion reduction doesn't measure up to animal welfare in importance is an isolated demand for rigor. I agree that the best animal welfare interventions are orders of magnitude more cost-effective than the best abortion reduction interventions. However, you could say the same for GiveWell top charities, Charity Entrepreneurship global health charities, or any other charity in global health.
A more precise reference class would be global health charities that reduce child mortality, like AMF.
Denial of fetal personhood typically leads to implausible conclusions regarding how we may treat infants and severely disabled humans, and arguably to a denial of human equality even among non-disabled adults. Even if these conclusions are accepted, most people would accept that these are appreciable bullets to bite, especially for those effective altruists who are invested in preventing infant mortality.
Agreed. In fact, animal-inclusive altruists make very similar arguments for why animals merit moral consideration. As Dale points out in "Blind Spots: Compartmentalizing", it's unclear why these arguments from marginal cases would apply for animals and not for fetuses.
Arguments that abortion is permissible even if the child has full moral status typically rely on the claims that abortion is letting die, rather than killing, and that there is no duty to assist the child to rescue it from death.
Not only is it killing; if the fetus is sentient, it's likely quite painful. Here are some descriptions of surgical abortion methods:
Labor Induction (20+ weeks gestation): The fetus is administered a lethal injection with no anesthesia, often of potassium chloride, which causes cardiac arrest and death within a minute. Potassium chloride is also used (with anesthesia) for the death penalty. If the fetus is sentient, this is "excruciatingly painful" because potassium chloride "inflames the potassium ions in the sensory nerve fibers, literally burning up the veins as it travels to the heart."
Dilation & Evacuation (13-24 weeks): The fetus's arms and legs are torn off by forceps before the fetus's head is crushed.
(As titotal points out, the vast majority of abortions occur before this. Still, hundreds of thousands of these surgical procedures occur each year.)
Thomson's violinist
A further objection to Thomson's violinist is that the person wakes up with the violinist attached through no action of their own. In cases of consensual sex, the risk that conception could occur is known. A more precise analogy would be rolling a die while knowing that if the die lands on 1, then the violinist will be attached to you. In that case, unplugging the violinist seems wrong.
According to the sources on Wikipedia, brain synapses in foetuses do not form until week 17, and the first evidence of "minimal consciousness and ability to feel pain" does not occur until week 30.
This comment from a pro-choice author on my post on abortion discusses lines of evidence for the different views on when fetal pain arises. It seems to corroborate Calum's perspective that Wikipedia editors are biased. From one of its linked studies (Derbyshire et al): "Overall, the evidence, and a balanced reading of that evidence, points towards an immediate and unreflective pain experience mediated by the developing function of the nervous system from as early as 12 weeks."
Even if we grant some moral weight to a 15 week old foetus (which I'm dubious of), it's hard to see a logical reason why it would approach the moral significance of an adult chicken.
A 15-week fetus has an order of magnitude more neurons than an adult chicken. (Red junglefowl, the wild relatives of chickens, have 221 million neurons, while 13-week fetuses have 3 billion brain cells. Since humans have a near 1:1 neuron-glia ratio, a 13-week fetus's neuron count should be an order of magnitude greater than a chicken's.) A chicken also has an underdeveloped cortex relative to mammals, which somewhat corresponds to the fetus's developing cortex.
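As a rough sketch of that comparison (figures taken from the sources cited above; the roughly 1:1 neuron-to-glia ratio is the stated approximation):

```python
chicken_neurons = 221e6                # red junglefowl, adult
fetus_brain_cells = 3e9                # 13-week human fetus
fetus_neurons = fetus_brain_cells / 2  # assuming a ~1:1 neuron-to-glia ratio
ratio = fetus_neurons / chicken_neurons
print(round(ratio, 1))  # ~6.8x under these figures
```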
If anything, I'd bet in favor of a 15 week fetus having more moral significance than an adult chicken rather than less.
organ transplant is a systemic problem and by donating you are helping kickstart a trend that fixes the system. However, having more kidney donors, while a boost in overall QALY equivalent to donating a few thousand dollars, is more than likely to harm people who need kidney transplants in the long run.
...
By addressing the organ transplant problem now, you are actively diminishing the pool of money and the pool of candidates for teams working to improve organ transplants.
Thanks! Might be good to also edit your post to put this summary at the top so that readers immediately see it.
+1 to the interest in these reading lists.
Because my job is very time-consuming, I haven't spent much time trying to understand the state of the art in AI risk. If there were a ready-made reading list I could devote 2-3 hours per week to, such that it'd take me a few months to learn the basic context of AI risk, that'd be great.
Yes, I agree with that caveat.
(Disclaimer: I take RP's moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)
Specifically with respect to cause prioritization between global health and animal welfare, do you think the evidence we've seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?
In "Worldview Diversification" (2016), Holden Karnofsky wrote that "If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF]." In 2023, Vasco Grilo replicated this finding by using the RP weights to find corporate campaigns 1.7k times as effective.
Let's say RP's moral weights are wrong by an order of magnitude, and chickens' experiences actually only have 3% of the moral weight of human experiences. Let's say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.
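Spelling out that arithmetic (a rough sketch; the 1000x baseline and the two tenfold discounts are the assumptions stated above):

```python
baseline_multiple = 1000  # order-of-magnitude multiple under RP's moral weights
weight_discount = 10      # chickens at 3% of human moral weight, 10x below RP
hedonic_discount = 10     # hedonic goods/bads as only 10% of welfare
adjusted = baseline_multiple / (weight_discount * hedonic_discount)
print(adjusted)  # 10.0: still an order of magnitude above global health
```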
While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.
I appreciate that, and I agree with you!
However, as far as I'm aware, EA-recommended family planning interventions do decrease the number of children people have. If these charities benefit farmed animals (and I believe they do), decreasing the human population is where these charities' benefits for farmed animals come from.
I've estimated that both MHI and FEM prevent on the order of 100 pregnancies for each maternal life they save. Unless my estimates are way too high (please let me know if they're wrong; I'm happy to update!), even if only a very small percentage of these pregnancies would have resulted in counterfactual births, both of these charities would still on net decrease the number of children people have.
It's noteworthy that if the procreation asymmetry is rejected, the sign of family planning interventions is the opposite of the sign of lifesaving interventions like AMF. Thus, those who support AMF might not support family planning interventions, and vice versa.
For what it's worth, both Holden and Jeff express considerable moral uncertainty regarding animals, while Eliezer does not. Continuing Holden's quote:
My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don't think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don't want us to pass up the potentially considerable opportunities to improve their welfare.
I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.
I agree with you that it's quite difficult to quantify how much Eliezer's views on animals have influenced the rationalist community and those who could steer TAI. However, I think the influence is significant: if Eliezer were a staunch animal activist, I think the discourse surrounding animal welfare in the rationalist community would be different. I elaborate upon why I think this in my reply to Max H.
I apologize for phrasing my comment in a way that made you feel like that. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically"; I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the sequences, and have been influenced on many subjects by Eliezer's writings.
I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:
Caring about animal welfare is important (99% confidence): Here's the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer.
Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it's corroborated here by many disinterested parties.
Eliezer's views on animal welfare have had significant influence on views of animal welfare in rationalist culture (75% confidence):
A fair critique is that sure, the sequences and HPMOR have had huge influence on rationalist culture, but the claim that Eliezer's views in domains that have nothing to do with rationality (like animal welfare) have had outsize influence on rationalist culture is much less clear.
My only pushback is the experience I've had engaging with rationalists and reading LessWrong, where I've just seen rationalists reflecting Eliezer's views on many domains other than "Rationality: A-Z" over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn't the only influential EA/rationalist who believes this, and he didn't originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
NYT: "two of the world's prominent A.I. labs – organizations that are tackling some of the tech industry's most ambitious and potentially powerful projects – grew out of the Rationalist movement...Elon Musk – who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment – founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community."
Sam Altman: "certainly [Eliezer] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc".
On whether aligned TAI would create a utopia for humans and animals, I think the arguments for pessimism, especially about the prospects for animals, are serious enough that having TAI steerers care about animals is very important.
Agreed on avoiding harming insects!
Though it's commendable to try to help insects, putting a bug in the trash might be negative, because that increases insect populations, and insects might lead negative lives: https://www.simonknutsson.com/how-good-or-bad-is-the-life-of-an-insect
Avoiding silk, shellac, and carmine also helps reduce suffering for many insects: https://www.wikihow.fitness/Avoid-Hurting-Insects