Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).
Erich_Grunewald 🔸
He reframes EA concepts in a more accessible way, such as replacing "counterfactuals" with the sports acronym "VORP" (Value Over Replacement Player).
And here I was thinking hardly a soul read my suggesting this framing …
Thanks for writing this, it's very interesting.
Instead, I might describe myself as a preferentialist or subjectivist about what matters, so that what's better is just what's preferred, or what would be better according to our preferences, attitudes or ways of caring, in general.
This sounds similar to Christine Korsgaard's (Kantian) view on value, where things only matter because they matter to sentient beings (people, to Kant). I think I was primed to notice this because I remember you had some great comments on my interview with her from four years ago.
Quoting her:
Utilitarians think that the value of people and animals derives from the value of the states they are capable of: pleasure and pain, satisfaction and frustration. In fact, in a way it is worse: In utilitarianism, people and animals don't really matter at all; they are just the place where the valuable things happen. That's why the boundaries between them do not matter. Kantians think that the value of the states derives from the value of the people and animals. In a Kantian theory, your pleasures and pains matter because you matter, you are an "end in yourself" and your pains and pleasures matter to you.
I guess "utilitarianism" above could be replaced with "hedonism" etc. and it would sort of match your writing that hedonism etc. is "guilty [...] of valuing things in ways that don't match how we care about things". Anyway, she discusses this view in much greater detail in Fellow Creatures.
See also St. Jules, 2024 and Roelofs, 2022 (pdf) for more on ways of caring and moral patienthood, using different terminology.
Fyi, the latter two of these links are broken.
Thanks!
The correct "moral fix" isn't "don't get mail," it's "don't kick dogs." Do you share this intuition of non-responsibility?
I'm also not a philosopher, but I guess it depends on what your options are. If your only way of influencing the situation is by choosing whether or not to get mail, and the dog-kicking is entirely predictable, you have to factor the dog-kicking into the decision. Of course the mailman is ultimately much more responsible for the dog kicking than you are, in the sense that your action is one you typically wouldn't expect to cause any harm, whereas his action will always predictably cause harm. (In the real world, obviously there are likely many ways of getting the mailman to stop kicking dogs that are better than giving up mail.)
I'm not sure whether it makes sense to think of blameworthy actions as wrong by definition. It probably makes more sense to tie blameworthiness to intentions, and in that case an action could be blameworthy even though it has good consequences, and even though endorsing it leads to good consequences. Anyway, if so, obviously the mailman is also much more blameworthy than you, given that he presumably had ill intentions when kicking the dog, whereas you had no ill intentions when getting your mail delivered.
Not a Meat Eater FAQ
To clarify, I think I'm ok with having a taboo on advocacy against "it is better for the world for innocent group X of people not to exist", since that seems like the kind of naive utilitarianism we should definitely avoid. I'm just against a taboo on asking or trying to better understand whether "it is better for the world for innocent group X of people not to exist" is true or not. I don't think Vasco was engaging in advocacy, my impression was that he was trying to do the latter, while expressing a lot of uncertainty.
Thanks, that is a useful distinction. Although I would guess Vasco would prefer to frame the theory of impact as "find out whether donating to GiveWell is net positive → help people make donation choices that promote welfare better" or something like that. I buy @Richard Y Chappell🔸's take that it is really bad to discourage others from effective giving (at least when it's done carelessly/negligently), but imo Vasco was not setting out to discourage effective giving, or it doesn't seem like that to me. He is, I'm guessing, cooperatively seeking to help effective givers and others make choices that better promote welfare, which they are presumably interested in doing.
There are obviously some cruxes here, including whether there is a moral difference between actively advocating for others not to hand out bednets vs. passively choosing to donate elsewhere / spend on oneself, and whether there is a moral difference between a bad thing being part of the intended MoA vs. a side effect. I would answer yes to both, but I have lower consequentialist representation in my moral parliament than many people here.
Yes, I personally lean towards thinking the act-omission difference doesn't matter (except maybe as a useful heuristic sometimes).
As for whether the harm to humans is incidental-but-necessary or part-of-the-mechanism-and-necessary, I'm not sure what difference it makes if the outcomes are identical? Maybe the difference is that, when the harm to humans is part-of-the-mechanism-and-necessary, you may suspect that it's indicative of a bad moral attitude. But I think the attitude behind "I won't donate to save lives because I think it creates a lot of animal suffering" is clearly better (since it is concerned with promoting welfare) than the attitude behind "I won't donate to save lives because I prefer to have more income for myself" (which is not).
Even if one would answer no to both cruxes, I submit that "no endorsing MoAs that involve the death of innocent people" is an important set of side rails for the EA movement. I think advocacy that saving the lives of children is net-negative is outside of those rails. For those who might not agree, I'm curious where they would put the rails (or whether they disagree with the idea that there should be rails).
I do not think it is good to create taboos around this question. Like, does that mean we shouldn't post anything that can be construed as concluding that it's net harmful to donate to GiveWell charities? If so, that would make it much harder to criticise GiveWell and find out what the truth is. What if donating to GiveWell charities really is harmful? Shouldn't we want to know and find out?
To me, any moral theory that dictates that innocent children should die is probably breaking apart at that point. Instead he bites the bullet and assumes that the means (preventing suffering) justifies the ends (letting innocent children die). I am sorry to say that I find that morally repugnant. [...] Instead, I have a strong sense that innocent children should not be let die. If my moral theory disagrees with the strong ethical sense, it is the strong ethical sense that should guide the moral theory, and not the other way around.
Hmm, but we are all letting children die all the time from not donating. I am donating just 15% of my income; I could certainly donate 20-30% and save additional lives that way. I think my failing to donate 20-30% is morally imperfect, but I wouldn't call it repugnant. What is it that makes "I won't donate to save lives because I think it creates a lot of animal suffering" repugnant but "I won't donate to save lives because I prefer to have more income for myself" not?
Thanks, that's encouraging! To clarify, my understanding is that beef cattle are naturally polled much more frequently than dairy cattle, since selectively breeding dairy cattle to be hornless affects dairy production negatively. If I understand correctly, that's because the horn growing gene is close to genes important for dairy production. And that (the hornless dairy cow problem) seems to be what people are trying to solve with gene editing.
Thanks. I take you to say roughly that you have certain core beliefs that you're unwilling to compromise on, even if you can't justify those beliefs philosophically. And also that you think it's better to be upfront about that than invent justifications that aren't really load-bearing for you. (Let me know if that's a misrepresentation.)
I think it's virtuous that you're honest about why you disagree ("I place much lower weight on animals") and I think that's valuable for discourse in that it shows where the disagreement lies. I don't have any objection to that. But I also think that saying you just believe that and can't/won't justify it ("I cannot give a tight philosophical defence of that view, but I am more committed to it than I am to giving tight philosophical defences of views") is not particularly valuable for discourse. It doesn't create any opening for productive engagement or movement toward consensus. I don't think it's harmful exactly, I just think more openness to examining whether the intuition withstands scrutiny would be more valuable.
(That is a question about discourse. I think there's also a separate question about the soundness of the decision procedure you described in your original comment. I think it's unsound, and therefore instrumentally irrational, but I'm not the rationality police so I won't get into that.)
Thanks, I fixed the link. And the rest of your comment seems right to me.
My actual reason to disagree is that I place much lower weight on animals than you, and I would axiomatically reject any moral weight on animals that implied saving kids from dying was net negative. I cannot give a tight philosophical defence of that view, but I am more committed to it than I am to giving tight philosophical defences of views. I suspect that if GiveWell were to publish a transparent argument as to why they ignore those effects, it would look similar to my argument: short and unsatisfactory to you. (Note: I work at GiveWell but this is my own view.)
I upvoted this comment for honesty, but this passage reads to me like committing to a conclusion ("saving kids from dying cannot be net negative") and then working backward to reject the premises ("animals matter morally", "saving kids from dying causes more (animal) suffering than it creates (human) welfare") that lead to a contradictory conclusion. That seems like textbook motivated reasoning to me? It doesn't seem like a good way of doing moral reasoning. I think it would be better to either reject the premise or to argue that the desired conclusion can follow from the premise after all.
Personally I think it's very much not obvious whether the meat eating problem is genuine. But given that the goodness of a very large part of the EA project so far hinges on it not being real, and given that it's far from obvious whether it's real, I think it would be useful to make progress on that question. So I'm glad that @Vasco Grilo🔸 and others are trying to make progress on it and a little discouraged to see some pushback (from several commenters) that doesn't really engage with Vasco's arguments/calculations.
(It does seem like, as @Ben Millwood🔸 has commented, any harm caused to animals by donating to global health charities is much smaller than the harm of not giving to animal charities. So maybe a better and more palatable framing for the meat eating problem is not, "Is giving to global health charities net negative/positive?" but "Is giving to global health charities more/less cost-effective than giving to animal charities?")
Karthik could also believe that any attempt to persuade someone to do what Karthik believes is best would backfire, or that it is intrinsically wrong to persuade another person to do what Karthik believes is good if they do not already believe the thing is good anyway. Though I agree with the general thrust of your comment.
Would it be feasible/useful to accelerate the adoption of hornless ("naturally polled") cattle, to remove the need for painful dehorning?
There are around 88M farmed cattle in the US at any point in time, and I'm guessing about an OOM more globally. These cattle are for various reasons frequently dehorned: about 80% of dairy calves and 25% of beef cattle are dehorned annually in the US, meaning roughly 13-14M procedures.
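A quick sanity check on the 13-14M figure (note: the annual calf-crop numbers below are my own rough assumptions for illustration, not figures from the original comment):

```python
# Back-of-the-envelope check on annual US dehorning procedures.
# Assumed (not from the post): roughly 9M dairy calves and 27M beef
# calves born in the US per year.
dairy_calves = 9_000_000   # assumed annual US dairy calf crop
beef_calves = 27_000_000   # assumed annual US beef calf crop

# Dehorning rates from the comment above: ~80% of dairy calves, ~25% of beef cattle.
dehorned = 0.80 * dairy_calves + 0.25 * beef_calves
print(f"~{dehorned / 1e6:.1f}M dehorning procedures per year")
```

Under those assumed herd sizes this lands at roughly 14M, consistent with the 13-14M estimate above.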
Dehorning is often done without anaesthesia or painkillers and is likely extremely painful, both immediately and for some time afterwards. Cattle horns are filled with blood vessels and nerves, so it's not like cutting nails. It might feel something like having your teeth amputated at the root.
Some breeds of cows are "naturally polled", meaning they don't grow horns. There have been efforts to develop hornless cattle via selective breeding, and some breeds (e.g., Angus) are entirely hornless. So there is already some incentive to move towards hornless cattle, but probably a weak incentive as dehorning is pretty cheap and infrequent. In cattle, there's a gene that regulates horn growth, with the hornless allele being dominant. So you can gene edit cattle to be naturally hornless. This seems to be an area of active research (e.g.).
So now I'm wondering, are there ways of speeding up the adoption of hornless cattle? If all US cattle were hornless, >10M of these painful procedures would be avoided annually. For example, perhaps you could fund relevant gene editing research, advocate to remove regulatory hurdles, or incentivize farmers to adopt hornless cattle breeds? Caveat: I only thought and read about all this for 15 minutes.
Yeah, but as you point out below, that simple model makes some unrealistic assumptions (e.g., that a solution will definitely be found that fully eliminates farmed animal suffering, and that a person starts contributing, in expectation, to solving meat eating at age 0). So it still seems to me that a better argument is needed to shift the prior.
As a first-pass model: removing person-years from the present doesn't reduce the number of animals harmed before a solution is found; it just makes the solution arrive later.
I doubt that is a good way to model this (for farmed animals). Consider the extremes:
If we reduce the human population size to 0, we reduce the amount of suffering of farmed animals to zero, since there will be no more farmed animals
If we increase the human population to the Malthusian limit, we increase the amount of suffering of farmed animals in the short and probably medium terms, and may or may not decrease farmed animal suffering in the longer term. One reason to think we would increase the amount of suffering by adding many more people is that, historically, farmed animal suffering and human population have likely been closely correlated. At any rate, the amount of farmed animal suffering in this scenario is likely nonzero.
So as a first approximation, we should just assume the amount of suffering in factory farms increases monotonically with the human population, since we can be fairly confident in these three data points (no suffering with no humans; lots of suffering with 8B humans; maybe more, maybe less suffering at the Malthusian limit). Of course that would be an oversimplified model. But it is a starting point, and getting from that starting point to "adding people on the margin reduces or doesn't affect expected farmed animal suffering" requires a better argument.
Here are the three most popular comments as of now. One, "giving to effective charities can create poverty in the form of exploited charity workers":
I've worked for a non-profit in the past at an unlivable wage. One of my concerns when I am looking at charities to give to and hearing that we need to give only to those that are most efficient, is that we are creating more poverty by paying the workers at some charities wages that they can't live on.
Two, "US charities exist because the rich aren't taxed enough":
Our whole system of charity in the US has developed because the wealthy aren't taxed enough, and hence our government doesn't do enough. Allowing the rich to keep so much wealth means we don't have enough national or state level funding for food, housing, healthcare, or education. We also don't have adequate government programs to protect the environment, conduct scientific research, and support art and culture. I'm deluged every day by mail from dozens of organizations trying to fill these gaps. But their efforts will never have the impact that well planned longterm government action could.
Three, "I just tip generously":
Lately I've been in the mindset of giving money to anyone who clearly has less than me when I have the opportunity. This mostly means extra generous tipping (when I know the tips go to the workers and not a corporation). Definitely not efficient, but hopefully makes a tiny difference.
These just seem really weak to me. What other options did the underpaid charity workers have, that were presumably worse than working for the charity? Even if the US taxed the rich very heavily, there would still be lots of great giving opportunities (e.g., to help people in other countries, and to help animals everywhere). Tipping generously is sort of admirable, but if it's admittedly inefficient, why not do the better thing instead? I guess these comments just illustrate that there is a lot of room for the core ideas of effective altruism (and basic instrumental rationality) to gain wider adoption.
Your past self is definitely wrong (GovAI does way more policy work than technical work), but maybe that's irrelevant since you prioritize advocacy work anyway (and GovAI does little of that).
I don't understand why so many are disagreeing with this quick take, and would be curious to know whether it's on normative or empirical grounds, and where exactly the disagreement lies. (I personally neither agree nor disagree as I don't know enough about it.)
From some quick searching, Lessig's best defence against accusations that he tried to steal an election seems to be that he wanted to resolve a constitutional uncertainty. E.g.: "In a statement released after the opinion was announced, Lessig said that 'regardless of the outcome, it was critical to resolve this question before it created a constitutional crisis'. He continued: 'Obviously, we don't believe the court has interpreted the Constitution correctly. But we are happy that we have achieved our primary objective: this uncertainty has been removed. That is progress.'"
But it sure seems like the timing and nature of that effort (post-election, specifically targeting Trump electors) suggest some political motivation rather than purely constitutional concerns. As best I can tell, it's in the same general category of efforts as Giuliani's effort to overturn the 2020 election, though importantly different in that Giuliani (a) had the support and close collaboration of the incumbent, (b) seemed to actually commit crimes doing so, and (c) did not respect court decisions the way Lessig did.
I'm registering a forecast: Within a few months we'll see a new Vasco Grilo post BOTECing that insecticide-treated bednets are net-negative expected value due to mosquito welfare. Looking forward to it. :)