I’m a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
Ariel Simnegar
Funnily enough, that verse is often referenced to me by religious Jews when I talk about how many EAs donate >>20%.
MISHNA: Rabbi GWWC said in the name of Rabbi Singer: It is a mitzvah (good deed) to pledge 10%, but one is not required to take upon himself the chumra (stringency) of the Further Pledge.
GEMARA: Rava asks: One who takes the Further Pledge can be compared to the Nazirite, who is called a sinner, for he is depriving himself of what the Holy One, Blessed be He, has provided him. So how can Rabbi GWWC say that one who takes the Further Pledge is a righteous man?
Abaye says in the name of Rabbi Singer: The mashal (parable) of the drowning child brings down that one is obligated to give up all of one’s possessions to save another’s life. For this reason Rabbi GWWC says one who takes the Further Pledge is a righteous man. As Scripture teaches us, “one who saves a life is as though he has saved the world entire”.
Rava asks: But why then is 10% sufficient, if it is brought down that one must give up all of one’s possessions to save a life?
Abaye says: In the matter of the city of Sodom, the Lord says that “for the sake of 10 righteous men, I would not destroy it”. By homiletic interpretation, if one donates even 10%, for his sake the world will be spared.
I agree that clinicians should use lidocaine or digoxin over potassium chloride (KCl) for the reason you gave.
I wrote that the injection is “often of potassium chloride”, not always.
Given that the fetus is receiving a lethal dose of potassium chloride, I don’t think adults tolerating a much smaller medicinal dose should tell us much about how painful a lethal dose would be?
I agree that the fetus isn’t being given potassium chloride intravenously, although I didn’t know that when I wrote the post (another commenter pointed it out). I’ll add a line in the post disclaiming that comparison.
Happy to hear we agree on fetal anesthesia :)
I also very much agree that there’s no conflict between this and the pro-choice position, and that increased abortion access would reduce fetal suffering in late-term abortions. (Although increasing abortion access has other, larger ethical problems—from a total utilitarian perspective, there doesn’t seem to be much difference between preventing a fetus from living a full life and doing the same for an infant or adult.)
On comparing individual fetuses to individual farm animals, it’s worth noting that a 13-week fetus has about half as many neurons as an adult cow. (Cows have 3 billion neurons, while 13-week fetuses have 3 billion brain cells. Since humans have a near 1:1 neuron-to-glia ratio, a 13-week fetus’s neuron count should be roughly half a cow’s.) So on at least one metric, they’d be pretty comparable. Of course, I’m pretty sure this fact is swamped by the other facts about factory farming you gave.
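To make the arithmetic explicit, here’s a minimal sketch. All figures are the rough estimates from the comment (3 billion total fetal brain cells, a ~1:1 neuron-to-glia ratio, 3 billion cow neurons), not precise measurements:

```python
# Rough neuron-count comparison, using the comment's estimates.
fetus_brain_cells = 3e9      # total brain cells at ~13 weeks (rough estimate)
neuron_fraction = 0.5        # ~1:1 neuron-to-glia ratio implies ~half are neurons
fetus_neurons = fetus_brain_cells * neuron_fraction  # ~1.5 billion neurons

cow_neurons = 3e9            # adult cow neuron count (rough estimate)

ratio = fetus_neurons / cow_neurons
print(ratio)  # 0.5, i.e. about half a cow's neuron count
```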
I agree that this probably wouldn’t be competitive with animal welfare. However, if we’re holding it to the standard for suffering-reducing interventions for humans, it could plausibly be more competitive.
This description of labor induction abortion says:
The skin on your abdomen is numbed with a painkiller, and then a needle is used to inject a medication (digoxin or potassium chloride) through your abdomen into the fluid around the fetus or the fetus to stop the heartbeat.
That sounds like local anesthesia for the mother, which from what I understand is achieved through an injection which numbs the tissue in a specific area rather than through an IV drip. So I don’t think this protocol would have any anesthetic effect on the fetus, though I’m not a medical expert and could be wrong.
Based on this, I think the sentence “The fetus is administered a lethal injection with no anesthesia” is accurate.
Thanks for that info! I didn’t know that.
Thanks for this! I agree that apart from speciesism, there isn’t a good reason to prioritize GHD over animal welfare if targeting suffering reduction (or just directly helping others).
Would you mind expanding further on the goals of the “reliable global capacity growth” cause bucket? It seems to me that several traditionally longtermist / uncategorized cause areas could fit into this bucket, such as:
Under your categorization, would these be included in GHD?
It also seems that some traditionally GHD charities would fall into the “suffering reduction” bucket, since their impact is focused on directly helping others:
Fistula Foundation
StrongMinds
Under your categorization, would these be included in animal welfare?
Also, would you recommend that GHD charity evaluators more explicitly change their optimization target from metrics which measure directly helping others / suffering reduction (QALYs, WELLBYs) to “global capacity growth” metrics? What might these metrics look like?
It might be that the strongest reason to prioritize GHD is because of flow-through effects, as you’ve suggested. But I don’t think most of those who prioritize GHD actually do so for that reason. They care about saving and improving people’s lives in the near term, and the units they use (QALYs, income doublings, WELLBYs) and stories they tell (the drowning child) reflect that.
If GHD were trying to optimize for robustly increasing long-term human capacity, I think the GHD portfolio of interventions would look very different. It might include certain longtermist cause areas such as improving institutional decision-making. It would be surprising if the best interventions when optimizing for long-term flow-through effects were also the best when optimizing for immediate effects on individuals. If you’re optimizing for flow-through effects, I agree that it’s non-obvious whether GHD or AW is better, but I think you probably shouldn’t be donating to either of those!
I think GHD donors choose GHD over AW simply because they care overwhelmingly more about humans than nonhuman animals. That’s also why they usually ignore animal effects in their cost-effectiveness analyses, even though those effects would swamp the effects on humans for many GHD interventions. If they were trying to impartially help others in the near term, they would choose AW.
Here’s a classification of GHD/AW which I think is more relevant to neartermists’ revealed preferences: The best impartial neartermist interventions are AW. The best neartermist interventions ignoring nonhuman animals are GHD. Under that classification, fetal welfare would be GHD.
I very much agree that it’s a clear moral improvement unless there’s some strong countervailing consideration. I would guess the greatest practical difficulty would be the intervention’s adjacency to politically contentious issues, which might make it intractable.
fwiw, I think a better comparison would be leading animal welfare interventions
I agree that there are many similarities between this proposal and animal welfare interventions. However, since I think the best animal welfare interventions are orders of magnitude more effective than GHD, I’d far rather GHD funding be diverted to this than animal welfare funding. I also just don’t think this intervention would be anywhere near the animal welfare cost-effectiveness bar, though it could conceivably pass the global health bar.
The Scale of Fetal Suffering in Late-Term Abortions
Thanks for putting this post together. It takes fortitude to commit so much to an altruistic project, and it takes integrity to make this decision and write up this explanation.
Insightful and well-argued post!
I found the hypothetical about NYT and CEA helpful for reasoning from first principles about acceptable journalistic practice. I came out of it empathizing more with Nonlinear’s feelings before and during the publication of Ben Pace’s article than I previously had.
Regarding Ben Pace’s explicit seeking of negative information and unwillingness to delay posting, you updated me from thinking of these as simple mistakes to now considering them egregiously bad.
Great point that an article author can’t just state their disclaimers at the top and expect readers to rationally recalibrate themselves and ignore the vibes of the evidence’s presentation.
I found it hard to update throughout this story because the presentation of evidence from both parties was (understandably) biased. As you pointed out, “Sharing Information About Nonlinear” presented sometimes true claims in a way which makes the reader unsympathetic to Nonlinear. Nonlinear’s response presented compelling rebuttals in a way which was calculated to increase the reader’s sympathy for Nonlinear. Both articles intentionally mix the evidence and the vibes in a way which makes it difficult for readers to separate the two. (I don’t blame Nonlinear’s response for this as much, since it was tit for tat.)
Thanks again for putting so much time and effort into this, and I’m excited to see what you write next.
Hi! I’m assuming that by “this” you mean the post’s argument, “wild animals” you mean wild animal welfare research, and “stray domestic animals” you mean pet shelters. In that case, I think the post’s argument might apply to wild animal welfare research, depending upon one’s model of the effects of that research. However, I think this post’s argument is unlikely to apply to pet shelters.
Comparing area was intended :)
If it’s unclear, I can add a note which says the circles should be compared by area.
Thanks so much David! :)
Agreed on avoiding harming insects!
Though it’s commendable to try to help insects, putting a bug in the trash might be negative, because that increases insect populations, and insects might lead negative lives: https://www.simonknutsson.com/how-good-or-bad-is-the-life-of-an-insect
Avoiding silk, shellac, and carmine also helps reduce suffering for many insects: https://www.wikihow.fitness/Avoid-Hurting-Insects
Thanks for the compliment :)
When I write “skepticism of formal philosophy”, I more precisely mean “skepticism that philosophical principles can capture all of what’s intuitively important”. Here’s an example of skepticism of formal philosophy from Scott Alexander’s review of What We Owe The Future:
I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that’s true, I will just not do that, and switch to some other set of axioms. If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity, I will just refuse to extend things to infinity...I realize this is “anti-intellectual” and “defeating the entire point of philosophy”.
You make a good point regarding the relative niche-ness of animal welfare and AI x-risk. I agree that my post’s analogy is crude and there are many reasons why people’s dispositions might favor AI x-risk reduction over animal welfare.
Thanks Gage!
That’s a good point I hadn’t considered! I don’t think that’s OP’s crux, but it is a coherent explanation of their neartermist cause prioritization.
Absolutely! Most of what’s important in this essay is just a restatement of your inspiring CEA from months ago :)
I’d like to give some context for why I disagree.
Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he’s admitted that “I truly sucked back then”. However, I think EA causes are more important than political differences. It’s valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we’re being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.
I also think Hanania has excellent takes on most issues, and that’s because he’s the most intellectually honest blogger I’ve encountered. I think Hanania likes EA because he’s willing to admit that he’s imperfect, unlike EA’s critics who would rather feel good about themselves than actually help others.
More broadly, I think we could be doing more to attract people who don’t hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:
In this era of political polarization, it would be a travesty for EA issues to become partisan.
All else equal, political diversity is good for community epistemics. In that regard, it should be encouraged for much the same reason that cultural and racial diversity are encouraged.
If we want EA to be a global social movement, we need to show that one can be EA even if they hold beliefs on other issues we find repugnant. I live in Panama for my job. When I arrived here, I had a culture shock from how backwards many people’s views are on racism and sexism. If we can’t be friends with the person next door with bad views, how are we going to make allies globally?