Multiplier Arguments are often flawed
Foreword
Sadly, it looks like the debate week will end without many of the stronger[1] arguments for Global Health being raised, at least at the post level. I don't have time to write them all up, and in many cases they would be better written by someone with more expertise, but one issue is firmly in my comfort zone: the maths!
The point I raise here is closely related to the Two Envelopes Problem, which has been discussed before. I think some of this discussion can come across as "too technical", which is unfortunate since I think a qualitative understanding of the issue is critical to making good decisions when under substantial uncertainty. In this post I want to try and demystify it.
This post was written quickly, and has a correspondingly high chance of error, for which I apologise. I am confident in the core point, and something seemed better than nothing.
Two envelopes: the EA version
A commonly-deployed argument in EA circles, hereafter referred to as the "Multiplier Argument", goes roughly as follows:
Under "odd" but not obviously crazy assumptions, intervention B is >100x as good as intervention A.
You may reasonably wonder whether those assumptions are correct.
But unless you put <1% credence in those assumptions, or think that B is negative in the other worlds, B will still come out ahead.
Because even if it's worthless 99% of the time, it's producing enough value in the 1% to more than make up for it!
So unless you are really very (over)confident that those assumptions are false, you should switch dollars/support/career from A to B.
I have seen this for both Animal Welfare and Longtermism as B, usually with Global Health as A. As written, this argument is flawed. To see why, consider the following pair of interventions:
A produces 1 unit of value per $, or 1000 units per $, with 50-50 probability.
B is identical to A, and independently will be worth 1 or 1000 per $ with 50-50 probability.
We can see that B's relative value to A is as follows:
In 25% of worlds, B is 1000x more effective than A
In 50% of worlds, B and A are equally effective.
In 25% of worlds, B is 1/1000th as effective as A
In no world is B negative, and clearly we have far less than 99.9% credence in A beating B, so B being 1000x better than A in its favoured scenario seems like it should carry the day per the Multiplier Argument...but these interventions are identical!
What just happened?
The Multiplier Argument relies on mathematical sleight of hand. It implicitly calculates the expected ratio of impact between B and A, and the expected ratio in the above example is indeed way above 1:
E(B/A) = 25% * 1000 + 50% * 1 + 25% * 1/1000 ≈ 250.5
But the difference in impact, or E(B-A), which is what actually counts, is zero. In 25% of worlds we gain 999 by switching from A to B, in a mirror set of worlds we lose 999, and in the other 50% there is no change.
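To make this concrete, here is a minimal sketch (Python, mine rather than anything from the original argument) that enumerates the four equally likely worlds in the toy example and compares the expected ratio with the expected difference. It also checks the reverse ratio, which is what lets the same trick argue for switching back:

```python
from itertools import product

# Each intervention is independently worth 1 or 1000 units per $, with 50-50 probability.
outcomes = [1, 1000]
worlds = list(product(outcomes, outcomes))  # (value of A, value of B); 4 equally likely worlds

expected_ratio_b_over_a = sum(b / a for a, b in worlds) / len(worlds)  # E(B/A)
expected_ratio_a_over_b = sum(a / b for a, b in worlds) / len(worlds)  # E(A/B)
expected_difference = sum(b - a for a, b in worlds) / len(worlds)      # E(B-A)

print(expected_ratio_b_over_a)  # ~250.5: the "multiplier" looks overwhelmingly pro-B
print(expected_ratio_a_over_b)  # ~250.5: by symmetry, the same argument favours switching back
print(expected_difference)      # 0.0: switching gains nothing in expectation
```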
TL;DR: Multiplier Arguments are incredibly biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch intervention from whatever you use as your base.
In fact, we could use a Multiplier Argument to construct a seemingly-overwhelming argument for switching from A to B, and then use the same argument to argue for switching back again! Which is essentially the classic Two Envelopes Problem.
Some implications
One implication is that you cannot, in general, ignore the inconvenient sets of assumptions where your suggested intervention B is losing to intervention A. You need to consider A's upside cases directly, and how the value being lost there compares to the value being gained in B's upside cases.
If A has a fixed value under all sets of assumptions, the Multiplier Argument works. One post argues this is true in the case at hand. I don't buy it, for reasons I will get into in the next section, but I do want to acknowledge that this is technically sufficient for Multiplier Arguments to be valid, and I do think some variant of this assumption is close-enough to true for many comparisons, especially intra-worldview comparisons.
But in general, the worlds where A is particularly valuable will correlate with the worlds where it beats B, because that high value is helping it beat B! My toy example did not make any particular claim about A and B being anti-correlated, just independent. Yet it still naturally drops out that A is far more valuable in the A-favourable worlds than in the B-favourable worlds.
Global Health vs. Animal Welfare
Everything up to this point I have high confidence in. This section I consider much more suspect. I had some hope that the week would help me on this issue. Maybe the comments will, otherwise "see you next time" I guess?
Many posts this week reference RP's work on moral weights, which came to the surprising-to-most "Equality Result": chicken experiences are roughly as valuable as human experiences. The world is not even close to acting as if this were the case, and so a >100x multiplier in favour of helping chickens strikes me as very credible if this is true.
But as has been discussed, RP made a number of reasonable but questionable empirical and moral assumptions. Of most interest to me personally is the assumption of hedonism.
I am not a utilitarian, let alone a hedonistic utilitarian. But when I try to imagine a hedonistic version of myself, I can see that much of the moral charge that drives my Global Health giving would evaporate. I have little conviction about the balance of pleasure and suffering experienced by the people whose lives I am attempting to save. I have much stronger conviction that they want to live. Once I stop giving any weight to that preference [2], my altruistic interest in saving those lives plummets.
To re-emphasise the above, down-prioritising Animal Welfare on these grounds does not require me to have overwhelming confidence that hedonism is false. For example a toy comparison could[3] look like:
In 50% of worlds hedonism is true, and Global Health interventions produce 1 unit of value while Animal Welfare interventions produce 500 units.
In 50% of worlds hedonism is false, and the respective amounts are 1000 and 1.
Despite a 50%-likely "hedonism is true" scenario where Animal Welfare dominates by 500x, Global Health wins on EV here.
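For concreteness, a minimal sketch of the expected values behind that claim, using the illustrative numbers above:

```python
# 50-50 credence in hedonism; toy payoffs per $ under each assumption (from the example above)
p_hedonism = 0.5
global_health = {"hedonism": 1, "not_hedonism": 1000}
animal_welfare = {"hedonism": 500, "not_hedonism": 1}

ev_gh = p_hedonism * global_health["hedonism"] + (1 - p_hedonism) * global_health["not_hedonism"]
ev_aw = p_hedonism * animal_welfare["hedonism"] + (1 - p_hedonism) * animal_welfare["not_hedonism"]

print(ev_gh)  # 500.5
print(ev_aw)  # 250.5 -- Global Health wins on EV despite Animal Welfare's 500x upside scenario
```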
Conclusion
As far as I know, the fact that Multiplier Arguments fail in general and are particularly liable to fail where multiple moral theories are being considered (as is usually the case when considering Animal Welfare) is fairly well-understood among many longtime EAs. Brian Tomasik raised this issue years ago, Carl Shulman makes a similar point when explaining why he was unmoved by the RP work here, Holden outlines a parallel argument here, and RP themselves note that they considered Two Envelopes "at length".
It is not, in isolation, a "defeater" of animal welfare, as a cursory glance at the prioritisation of the above would tell you. I would though encourage people to think through and draw out their tables under different credible theories, rather than focusing on the upside cases and discarding the downside ones as the Multiplier Argument pushes you to do.
You may go through that exercise and decide, as some do, that the value of a human life is largely invariant to how you choose to assign moral value. If so, then you can safely go where the Multiplier Argument takes you.
Just be aware that many of us do not feel that way.
- ^
Defined roughly as "the points I'm most likely to hear and give most weight to when discussing this with longtime EAs in person".
- ^
Except to the extent it's a signal about the pleasure/suffering balance, I suppose. I don't think it provides much information though; people generally seem to have a strong desire to survive in situations that seem to me to be very suffering-dominated.
- ^
For the avoidance of doubt, to the extent I have attempted to draw this out, my balance of credences and values ends up a lot messier.
I'd be interested in hearing more of why you believe global health beats animal welfare on your views. It sounds like it's about placing a lot of value on people's desires to live. How are you making comparisons of desire strength in general between individuals, including a) between humans and other animals, and b) between different desires, especially the desire to live and other desires?
Personally, I think there's a decent case for nonhuman animals mattering substantially in expectation on non-hedonic views, including desire and preference views:
I think it's not too unlikely that nonhuman animals have access to whatever general non-hedonic values you care about, e.g. chickens probably have (conscious) desires and preferences, and there's a decent chance shrimp and insects do, too (more here on sophisticated versions of desires and preferences in other animals), and
if they do have access to them, it's not too unlikely that
their importance reaches heights in nonhumans that are at least a modest fraction of what they do in humans, e.g. by measuring their strength using measures of attention or effects on attention or human-based units, or
interpersonal comparisons aren't possible for those non-hedonic values, between species and maybe even just between humans, anyway (more here and here), so
we can't particularly justify favouring humans or justify favouring nonhumans, and so we just aim for something like Pareto efficiency, across species or even across all individuals, or
we normalize welfare ranges or capacities for welfare based on their statistical properties, e.g. variance or range, which I'd guess favours animal welfare, because
it will treat all individuals (humans and other animals) as if they have similar welfare ranges or capacities for welfare or individual value at stake, and
far greater numbers of life-years and individuals are helped per $ with animal welfare interventions.
I also discuss this and other views, including rights-based theories, contractualism, virtue ethics and special obligations, in this section of the piece of mine that you cited.
Hi Michael,
Sorry for putting off responding to this. I wrote this post quickly on a Sunday night, so naturally work got in the way of spending the time to put this together. Also, I just expect people to get very upset with me here regardless of what I say, which I understand (from their point of view I'm potentially causing a lot of harm) but naturally causes procrastination.
I still don't have a comprehensive response, but I think there are now a few things I can flag for where I'm diverging here. I found titotal's post helpful for establishing the starting point under hedonism:
However, even before we get into moral uncertainty I think this still overstates the case:
Animal welfare (AW) interventions are much less robust than the Global Health and Development (GHD) interventions animal welfare advocates tend to compare them to. Most of them are fundamentally advocacy interventions, which I think advocates tend to overrate heavily.
How to deal with such uncertainty has been the topic of much debate, which I can't do justice to here. But one thing I try to do is compare apples-to-apples for robustness where possible; if I relax my standards for robustness and look at advocacy, how much more estimated cost-effectiveness do I get in the GHD space? Conveniently, I currently donate to Giving What We Can as "Effective Giving Advocacy" and have looked into their forward-looking marginal multiplier a fair bit; I think it's about 10x. Joel Tan looked and concluded 13x. I've checked with others who have looked at GWWC in detail; they're also around there. I've also seen 5x-20x claims for things like lead elimination advocacy, but I haven't looked into those claims in nearly as much detail.
Overall I think that if you're comfortable donating to animal welfare interventions, comparing to AMF/GiveWell "Top Charities" is just a mistake; you should be comparing to the actual best GHD interventions under your tolerance for shaky evidence, which will have estimated cost-effectiveness 10x higher or possibly even more.
Also, I subjectively feel like AW is quite a bit less robust than even GHD advocacy; there's a robustness issue from advocacy in both cases, but AW also really struggles with a lack of feedback loops (we can't ask the animals how they feel) and so I think is much more likely to end up causing harm on its own terms. I don't know how to quantify this issue, and it doesn't seem like a huge issue for cage-free specifically, so will set this aside. Back when AW interventions were more about trying to end factory farming rather than improving conditions on factory farms it did worry me quite a bit.
As I noted in my comment under that post, Open Phil thinks the marginal FAW opportunity going forward is around 20% of Saulius, not 60% of Saulius; I haven't seen anything that would cause me to argue with them on this, and this cuts the gap by 3x.
Another issue is around "pay it forward" or "ripple" effects, where helping someone enables them to help others, which seem to only apply to humans not animals. I'm not looking at the long-term future here, just the next generation or so; after that I tend to think the ripples fade out. But even over that short time, the amount of follow-on good a life saved can do seems significant, and probably moves my sense of things by a small amount. Still, it's hard to quantify and I'll set this aside as well.
After the two issues I am willing to quantify we're down to around 3.3x, and we're still assuming hedonism.
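To spell out the arithmetic behind that figure (my reconstruction; the roughly 100x headline multiplier is an assumption here, since titotal's starting number isn't quoted above):

```python
# Hypothetical headline multiplier for AW over GiveWell Top Charities under hedonism (assumed ~100x)
headline_multiplier = 100

ghd_advocacy_multiplier = 10  # best GHD advocacy (e.g. GWWC) vs Top Charities, per the comment above
saulius_adjustment = 3        # marginal FAW opportunity ~20% of Saulius rather than 60%

remaining_gap = headline_multiplier / (ghd_advocacy_multiplier * saulius_adjustment)
print(round(remaining_gap, 1))  # ~3.3x
```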
On the other hand, I have the impression that RP made an admirable effort to tend towards conservatism in some empirical assumptions, if not moral ones. I think Open Phil also tends this way sometimes. So I'm not as sure as I usually would be about what happens if somebody looks more deeply; overwhelmingly I would say EA has found that interventions get worse the more you look at them, which is a lot of why I penalise non-robustness in the first place, but perhaps Open Phil + RP have been conservative enough that this isn't the case?
***
Still, my overall guess is that if you assume hedonism AW comes out ahead. I am not a moral realist; if people want to go all-in on hedonism and donate to AW on those grounds, I don't see that I have any grounds to argue with them. But as my OP alluded to, I tend to think there is more at stake / humans are "worth more" in the non-hedonic worlds. So when I work through this I end up underwhelmed by the overall case.
***
This brings us to the much thornier territory of moral uncertainty. While continuing to observe that I'm out of my depth philosophically, and am correspondingly uncertain how best to approach this, some notes on how I think about this and where I seem to be differing:
I find experience machine thought experiments, and people's lack of enthusiasm for them, much more compelling than "tortured Tim" thought experiments for trying to get a handle on how much of what matters is pleasure/suffering. The issue I see with modelling extreme suffering is that it tends to heavily disrupt non-hedonic goods, and so it's hard to figure out how much of the badness is the suffering versus the disruption. We can get a sense of how much people care about this disruption from their refusal to enter the experience machine; a lot of the rejections I see and personally feel boil down to "I'm maxing out pleasure but losing everything that 'actually matters'".
RP did mention this but I found their handling unconvincing; they seem to have very different intuitions to me for how much torture compromises human ability to experience what "actually matters". Empirical evidence from people with chronic nerve damage is similarly tainted by the fact that e.g. friends often abandon you when you're chronically in pain, you may have to drop hobbies that meant a lot to you, and so on.
I've been lucky enough never to experience anything that severe, but if I look at the worst periods of my life it certainly seemed like a lot more impact came from these "secondary" effects (interference with non-hedonic goods) than from the primary suffering. My heart goes out to people who are dealing with worse conditions and very likely taking larger "secondary" hits.
I also just felt like the Tortured Tim thought experiment didn't "land" even on its own terms for me, similar to the sentiments expressed in this comment and this comment.
I mostly agree with your reasoning before even getting into moral uncertainty and up to and including this:
However, if we're assuming hedonism, I think your starting point is plausibly too low for animal welfare interventions, because it underestimates the disvalue of pain relative to life in full health, as I argue here.
I also think your response to the Tortured Tim thought experiment is reasonable. Still, I would say:
If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot, supporting RP's take. And if you weigh desires/preferences by attention or their effects on attention, it seems nonhuman animals matter a lot (but something like neuron count weighting isn't unreasonable).
I assume this is not how you weigh desires/preferences, though, or else you probably wouldn't disagree with RP here, and especially in the ways you do!
If you don't weigh desires by attention or their effects on attention, I don't see how you can ground interpersonal utility comparisons at all, especially between humans and other animals but even between humans, who may differ dramatically in their values. I still don't see a positive case for animals not mattering much.
Recently I failed to complete a dental procedure because I kept flinching whenever the dentist hit a particularly sensitive spot. They needed me to stay still. I promise you I would have preferred to stay still, not least because what ended up happening was I had to have it redone and endured more pain overall. My forebrain understood this, my hindbrain is dumb.
(FWIW the dentist was very understanding, and apologetic that the anesthetic didn't do its job. I did not get the impression that my failure was unusual given that.)
When I talk about suffering disrupting enjoyment of non-hedonic goods I mean something like that flinch: a forced "eliminate the pain!" response that likely made good sense back in the ancestral environment, but not a choice or preference in the usual sense of that term. This is particularly easy to see in cases like my flinch where the hindbrain's "preference" is self-defeating, but I would make similar observations in some other cases, e.g. addiction.
I don't quite see what you're driving at with this line of argument.
I can see how being able to firmly "ground" things is a nice/helpful property for a theory of "what is good?" to have. I like being able to quantify things too. But to imply that measuring good must be this way seems like a case of succumbing to the Streetlight Effect, or perhaps even the McNamara fallacy if you then downgrade other conceptions of good in the style of the below quote.
Put another way, it seems like you prefer to weight by attention because it makes answers easier to find, but what if such answers are just difficult to find?
The fact that "what is good?" has been debated for literally millennia with no resolution in sight suggests to me that it just is difficult to find, in the same way that after some amount of time you should acknowledge your keys just aren't under the streetlight.
To avoid the above pitfall, which I think all STEM types should keep in mind, when I suspect my numbers are failing to capture the (morally) important things my default response is to revert in the direction of common sense (morality). I think STEM people who fail to check themselves this way often end up causing serious harm[1]. In this case that would make me less inclined to trade human lives for animal welfare, not more.
I'll probably leave this post at this point unless I see a pressing need for further clarification of my views. I do appreciate you taking the time to engage politely.
SBF is the obvious example here, but really I've seen this so often in EA. Big fan of Warren Buffett's quote here:
It's worth distinguishing different attentional mechanisms, like motivational salience from stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there's top-down/voluntary/endogenous attention, the executive function you use to intentionally focus on things.
We could pick any of these and measure their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn't.
I don't mean to discount preferences if interpersonal comparisons can't be grounded. I mean that if animals have such preferences, you can't say they're less important (there's no fact of the matter either way), as I said in my top-level comment.
Just to flag that Derek posted on this very recently. It's directly connected to both the present post and Michael's.
To the extent that we discuss this issue rarely, it really ought to be worth someone's time to write up these supposed strong arguments. To the extent that they haven't, even after a well-publicised week of discussion, I will believe it more likely they don't exist.
@AGB would you be willing to provide brief sketches of some of these stronger arguments for global health which weren't covered during the Debate Week? Like Nathan, I've spent a ton of time discussing this issue with other EAs, and I haven't heard any arguments I'd consider strong for prioritizing global health which weren't mentioned during Debate Week.
First, want to flag that what I said was at the post level and then defined stronger as:
You said:
So I can give examples of what I was referring to, but to be clear we're talking somewhat at cross purposes here:
I would not expect you to consider them strong.
You are not alone here of course, and I suspect this fact also helps to answer Nathan's confusion about why nobody wrote them up.
Even my post, which has been decently-received all things considered, I don't consider an actually good use of time; I did it more in order to sleep better at night.
They often were mentioned at comment-level.
With that in mind, I would say that the most common argument I hear from longtime EAs is variants of "animals don't count at all". Sometimes it's framed as "almost certainly aren't sentient" or "count as ~nothing compared to a child's life". You can see this from Jeff Kaufman, and Eliezer Yudkowsky, and it's one I hear a decent amount from EAs closer to me as well.
If you've discussed this a ton I assume you have heard this too, and just aren't thinking of the things people say here as strong arguments? Which is fine and all, I'm not trying to argue from authority, at least not at this time. My intended observation was "lots of EAs think a thing that is highly relevant to the debate week, none of them wrote it up for the debate week".
I think that observation holds, though if you still disagree I'm curious why.
(Just wanted to say that your story of earning to give has been an inspiration! Your episode with 80k encouraged me to originally enter quant trading.)
Given your clarification, I agree that your observation holds. I too would have loved to hear someone defend the view that "animals don't count at all". I think it's somewhat common among rationalists, although the only well-known EA-adjacent individuals I know who hold it are Jeff, Yud, and Zvi Mowshowitz. Holden Karnofsky seems to have believed it once, but later changed his mind.[1]
As @JackM pointed out, Jeff didn't really justify his view in his comment thread. I've never read Zvi justify that view anywhere either. I've heard two main justifications for the view, either of which would be sufficient to prioritize global health:
Overwhelming Hierarchicalism
Solely by virtue of our shared species, helping humans may be lexicographically preferential to helping animals, or perhaps their preferences should be given an enormous multiplier.
I use the term "overwhelming" because depending on which animal welfare BOTEC is used, if we use constant moral weights relative to humans, you'd need a 100x to 1000x multiplier for the math to work out in favor of global health. (This comparison is coherent to me because I accept Michael St. Jules' argument that we should resolve the two envelopes problem by weighing in the units we know, but I acknowledge that this line of reasoning may not appeal to you if you don't endorse that resolution.)
I personally find overwhelming hierarchicalism (or any form of hierarchicalism) to be deeply dubious. I write more about it here, but I simply see it as a convenient way to avoid confronting ethical problems without having the backing of any sound theoretical justification. I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans. There's just no prior for why that would be the case.
Denial of animal consciousness
Yud and maybe some others seem to believe that animals are most likely not conscious. As before, they'd have to be really certain that animals aren't conscious to endorse global health here. Even if there's a 10% chance that chickens are conscious, given the outsize cost-effectiveness of corporate campaigns if they are, I think they'd still merit a significant fraction of EA funding. (Probably still more than they're currently receiving.)
I think it's fair to start with a very strong prior that at least chickens and pigs are probably conscious. Pigs have brains with all of the same high-level substructures, which are affected the same way by drugs/painkillers/social interaction as humans' are, and they act in all of the same ways that humans would when confronted with situations of suffering and terror. It would be really surprising a priori if what was going on was merely a simulacrum of suffering with no actual consciousness behind it. Indeed, the vast majority of people seem to agree that these animals are conscious and deserve at least some moral concern. I certainly remember being able to feel pain as a child, and I was probably less intelligent than an adult pig during some of that.
Apart from that purely intuitive prior, while I'm not a consciousness expert at all, the New York Declaration on Animal Consciousness says that "there is strong scientific support for attributions of conscious experience to other mammals and to birds". Rethink Priorities' and Luke Muehlhauser's work for Open Phil corroborate that. So Yud's view is also at odds with much of the scientific community and other EAs who have investigated this.
All of this is why I feel like Yud's Facebook post needed a very high burden of proof to be convincing to me. Instead, it seems like he just kept explaining what his model (a higher-order theory of consciousness) believes without actually justifying his model. He also didn't admit any moral uncertainty about his model. He asserted some deeply unorthodox and unintuitive ideas (like pigs not being conscious), admitted no uncertainty, and didn't make any attempt to justify them. So I didn't find anything about his Facebook post convincing.
Conclusion
To me, the strongest reason to believe that animals don't count at all is because smart and well-meaning people like Jeff, Yud, and Zvi believe it. I haven't read anything remotely convincing that justifies that view on the merits. That's why I didn't even mention these arguments in my follow-up post for Debate Week.
Trying to be charitable, I think the main reasons why nobody defended that view during Debate Week were:
They didn't have the mental bandwidth to be willing to deal with an audience I suspect would be hostile. Overwhelming hierarchicalism is very much against the spirit of radical empathy in EA.
They may have felt like most EAs don't share the basic intuitions underlying their views, so they'd be talking to a wall. The idea that pigs aren't conscious might seem very intuitive to Eliezer. To me, and I suspect to most people, it seems wild. I could be convinced, but I'd need to see way more justification than I've seen.
In 2017, Holden's personal reflections "indicate against the idea that e.g. chickens merit moral concern". In 2018, Holden stated that "there is presently no evidence base that could reasonably justify high confidence [that] nonhuman animals are not 'conscious' in a morally relevant way".
Strong downvoted because I find this statement repugnant: "I put about as much weight on it as the idea that the interests of the Aryan race should be lexicographically preferred to the interests of non-Aryans."
Why go there? You don't do yourself or animal welfare proponents any favors. Make the argument in a less provocative way.
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of hierarchicalism.
I have a lot of respect for most pro-animal arguments, but why go this way?
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
So, the argument you're making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like it's a common approach in moral philosophy. But, I recognize that these comparisons are emotionally charged, and it's important to use them carefully to avoid alienating others.
(I didn't downvote your comment, by the way.)
I feel bad that my comment made you (and a few others, judging by your comment's agreevotes) feel bad.
As JackM points out, that snarky comment wasn't addressing views which give very low moral weights to animals due to characteristics like mind complexity, brain size, and behavior, which can and should be incorporated into welfare ranges. Instead, it was specifically addressing overwhelming hierarchicalism, which is a view which assigns overwhelmingly lower moral weight based solely on species.
My statement was intended to draw a provocative analogy: There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like "we have the right to exploit animals because we're stronger than them", or "exploiting animals is the natural order", which could have come straight out of Mein Kampf. Drawing a provocative analogy can (sometimes) force a person to grapple with the cognitive dissonance from holding such a position.
While hierarchicalism is common among the general public, highly engaged EAs generally don't even argue for hierarchicalism because it's just such a dubious view. I wouldn't write something like this about virtually any other argument for prioritizing global health, including ripple effects, neuron count weighting, denying that animals are conscious, or concerns about optics.
"There's no theoretical reason why one's ethical system should lexicographically prefer one race/gender/species over another, based solely on that characteristic. In my experience, people who have this view on species say things like 'we have the right to exploit animals because we're stronger than them', or 'exploiting animals is the natural order'." I completely agree with this (although I think it's probably a straw man; I can't see anyone here arguing those things).
I just think it's a really bad idea to compare almost any argument (including non-animal-related ones) with Nazi Germany and that thought-world. I think it's possible to provoke without going this way.
1) Insensitive to the people groups that were involved in that horrific period of time
2) Distracts from the argument itself (like it has here, although that's kind of on me)
3) Brings potential unnecessary negative PR issues with EA, as it gives unnecessary ammunition for hit pieces.
It's the style, not the substance, that I'm strongly against here.
I'm surprised this is the argument you went for. FWIW I think the strongest argument might be that global health wins due to ripple effects and is better for the long-term future (but I still think this argument fails).
On animals just "not counting": I've been very frustrated with both Jeff Kaufman and Eliezer Yudkowsky on this.
Jeff because he doesn't seem to have provided any justification (from what I've seen) for the claim that animals don't have relevant experiences that make them moral patients. He simply asserts this as his view. It's not even an argument, let alone a strong one.
Eliezer has at least defended his view in a Facebook thread, which unfortunately I don't think exists anymore as all links I can find go to some weird page in another language. I just remember two things that really didn't impress me:
I wish I remembered this better, but he made some sort of assertion that animals don't have relevant experiences because they do not have a sense of self. Firstly, there is some experimental evidence that animals have a sense of self. But also, I remember David Pearce replying that there are occasions when humans lose a sense of self temporarily but can still have intense negative experiences in that moment, e.g. extreme debilitating fright. This was a very interesting point that Eliezer never even replied to, which seemed a bit suspect. (Apologies if I'm misremembering anything here.)
He didn't respond to the argument that, under uncertainty, we should give animals the benefit of the doubt so as to avoid potentially committing a grave moral catastrophe. This may not be relevant to the question of animal welfare vs global health, but it is relevant to, for example, the choice to go vegan. His dismissal of this argument also seemed a bit suspect to me.
I think there has been very little argument for animals not counting at the post level because, quite simply, the argument that (in expectation) they do count is just far stronger. See the new 80,000 Hours article on factory farming which does a nice job summarizing the key points.
I agree I haven't given an argument on this. At various times people have asked what my view is (ex: we're talking here about something prompted by my completing a survey prompt) and I've given that.
Explaining why I have this view would be a big investment in time: I have a bundle of intuitions and thoughts that put me here, but converting that into a cleanly argued blog post would be a lot more work than I would normally do for fun, and I don't expect this to be fun.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate, and since I think my views on animals are much less important than many of my other views, I wouldn't see this as at all a good thing. I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
The normal thing to do would be to stop here: I've said what my view is, and explained why I've never put the effort into a careful case for that position. But I'm more committed to transparency than I am to the above, so I'm going to take about 10 minutes (I have 14 minutes before my kids wake up) to very quickly sketch the main things going into my view. Please read this keeping in mind that it is something I am sharing to be helpful, and I'm not claiming it's fully argued.
The key question for me is whether, in a given system, there's anyone inside to experience anything.
I think extremely small collections of neurons (ex: nematodes) can receive pain, in the sense of updating on inputs to generate less of some output. But I draw a distinction between pain and suffering, where the latter requires experience. And I think it's very unlikely nematodes experience anything.
I don't think this basic pleasure or pain matters, and you can't make something extremely morally good by maximizing the number of happy neurons per cubic centimeter.
I'm pretty sure that most adult humans do experience things, because I do and I can talk to other humans about this.
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
I don't find most things that people give as examples for animal consciousness to be very convincing, because you can often make quite a simple system that displays these features.
While some of my views above could imply that some humans come out as morally more valuable than others, I think it would be extremely destructive to act that way. Lots and lots of bad history there. I treat all people as morally equal.
The arguments for extending this to people as a class don't seem to me to justify extending this to all creatures as a class.
I also think there are things that matter beyond experienced joy and suffering (preference satisfaction, etc), and I'm even less convinced that animals have these.
Eliezer's view is reasonably close to mine, in places where I've seen him argue it.
(I'm not going to be engaging with object-level arguments on this issue; I'm not trying to become an anti-animal advocate.)
Thanks for your response.
I'd be interested to know how likely you think it is that you could do a "good job". You say you have a "bundle of intuitions and thoughts", which doesn't seem like much to me.
I'm also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a "bundle of intuitions and thoughts" on what is ultimately a very difficult and important question.[1] In your original comment you say "This isn't as deeply a considered view as I'd like". Were you saying you haven't considered deeply enough or that the general community hasn't?
And thanks for the sketch of your reasoning, but ultimately I don't think it's very helpful without some justification for claims like the following:
I also put myself at the far end of the spectrum in the other direction so I feel I should say something about that. I think arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary) and I would not say I'm relying on intuition. I'm not of course sure that animals are moral patients, but even if you put a small probability on it, the vast numbers of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare are better in expectation than resources for global health. Ultimately, for this argument not to work based on believing animals aren't moral patients, I think you probably need to be very confident of this to counteract the vast numbers of animals that can be helped.
I do think I could do a good job, yes. While I've been thinking about these problems off and on for over a decade I've never dedicated actual serious time here, and in the past when I've put that kind of time into work I've been proud of what I've been able to do.
What I meant by that is that I don't have my overall views organized into a form optimized for explaining to others. I'm not asking other people to assume that because I've inscrutably come to this conclusion I'm correct or that they should defer to me in any way. But I'd also be dishonest if I didn't accurately report my views.
Primarily the former. While if someone in the general community had put a lot of time into looking at this question from a perspective similar to my own and I felt like their work addressed my questions that would certainly help, given that no one has and I'm instead forming my own view I would prefer to have put more work into that view.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views: sorry, I don't really buy it. For example, I don't think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you're not under any obligation to write anything (well...perhaps some would argue you are, but I'll concede you're not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I'd write it up.
Ah, thank you for clarifying! That is a much stronger sense of "doing a good job" than I was going for. I was trying to point at something like, successfully writing up my views in a way that felt like a solid contribution to the discourse. Explaining what I thought, why I thought it, and why I didn't find the standard counter arguments convincing. I think this would probably take me about two months of full-time work, so a pretty substantial opportunity cost.
I think I could do this well enough to become the main person people pointed at when they wanted to give an example of a "don't value animals" EA (which would probably be negative for my other work), but even major success here would probably only result in convincing <5% of animal-focused EAs to change what they were working on. And much less than that for money, since most of the EA money is from OP, which funds animal work as part of an explicit process of worldview diversification.
I would be primarily known as an anti-animal advocate if I wrote something like this, even if I didn't want to be.
On whether I would need to put my time into continuing to defend the position, I agree that I strictly wouldn't have to, but I think that given my temperament and interaction style I wouldn't actually be able to avoid this. So I need to think of this as if I am allocating a larger amount of time than what it would take to write up the argument.
I don't think this is what Jeff said.
OK, so he says he would primarily be "known" as an anti-animal advocate, not "become" one.
But he then also says the following (bold emphasis mine):
I'm struggling to see how what I said isn't accurate. Maybe Jeff should have said "I would feel compelled to" rather than "I would need to".
To my eyes "be known as an anti-animal advocate" is a much lower bar than "be an anti-animal advocate."
For example I think some people will (still!) consider me an "anti-climate change advocate" (or "anti-anti-climate change advocate?") due to a fairly short post I wrote 5+ years ago. I would, from their perspective, take actions consistent with that view (eg I'd be willing to defend my position if challenged, describe ways in which I've updated, etc). Moreover, it is not implausible that from their perspective, this is the most important thing I do (since they don't interact with me at other times, and/or they might think my other actions are useless in either direction).
However, by my lights (and I expect by the lights of e.g. the median EA Forum reader) this would be a bad characterization. I don't view arguing against climate change interventions as an important aspect of my life, nor do I believe my views on the matter are particularly outside of academic consensus.
Hence the distinction between "known as" vs "become."
You seem to have ignored the bit I put in bold in my previous comment.
I don't think there is or ought to be an expectation to respond to every subpart of a comment in a reply.
It's the only part of my comment that argues Jeff was effectively saying he would have to "be" an anti-animal advocate, which is exactly what you're arguing against.
So I guess my best reply is just to point you back to that...
Oh well, was nice chatting.
I guess I still don't think of "I would need to spend a lot of time as a representative of this position" as being an anti-animal advocate. I spend a lot of time disagreeing with people on many different issues and yet I'd consider myself an advocate for only a tiny minority of them.
Put another way, I view the time spent as just one of the costs of being known as an anti-animal advocate, rather than being one.
What do you think of the following evidence?
Rats and pigs seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[1] In other words, it seems like they can be taught to tell us what they're feeling in ways unnatural and non-instinctive to them. To me, the difference between this and human language is mostly just a matter of degree, i.e. we form more associations and form them more easily, and we do recursion.
Graziano (2020, pdf), an illusionist and the inventor of Attention Schema Theory, also takes endogenous/top-down/voluntary attention control to be evidence of having a model (schema) of one's own attention.[2] Then, according to Nieder (2022), there is good evidence for the voluntary/top-down control of attention (and working memory) at least across mammals and birds, and some suggestive evidence for it in some fish.
And I would expect these to happen in fairly preserved neural structures across mammals, at least, including humans.
I also discuss desires and preferences in other animals more here and here.
Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3.
Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it's the anxiety itself that they're discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants.
Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, "jet lag", defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
However, Mason and Lavery (2022) caution:
I would expect that checking which brain systems are involved and what their typical functions are could provide further evidence. The case for other mammals would be strongest, given more preserved functions across them, including humans.
Graziano (2020, pdf):
Thanks for taking the time to lay out your view clearly here, and for explaining why you do not spend a lot of time on the topic (which I respect).
If I understand correctly, the difference in consideration you make between humans and animals seems to boil down to "I can talk to humans, and they can tell me that they have an inner experience, while animals cannot (same for small children)".
While nobody disputes that, I find it weird that your conclusion is not "I'm very uncertain about other systems", but "other systems that cannot tell me directly about their inner experience (very small children, animals) probably don't have any relevant inner experience". I'm not sure how you got to that conclusion. At the very least, this would justify extreme uncertainty.
Personally, I think that the fact that animals display a lot of behaviour similar to humans in similar situations should be a significant update toward thinking they have some kind of experience. For instance, a pig screams and tries to escape when it is castrated, just as a human would (we have to observe behaviours).
We can probably build robots that can do the same thing, but that just means we're good at mimicking other life forms (for instance, we can also build LLMs which tell us they are conscious, and we don't use that to conclude humans are not sentient).
I don't think this is what Jeff believes, though I guess his literal words are consistent with this interpretation.
The discussion is archived here and the original Facebook post here.
It was also discussed again three years ago here.
Thank you! Links in articles such as this just weren't working.
This is the relevant David Pearce comment I was referring to which Yudkowsky just ignored despite continuing to respond to less challenging comments:
I'm confused about how this works; could you elaborate?
My usual causal chain linking these would be "argument is weak" → "~nobody believes it" → "nobody posts it".
The middle step fails here. Do you have something else in mind?
FWIW, I thought these two comments were reasonable guesses at what may be going on here.
I'm not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
"argument is weak but some people intuitively believe it in part because they want it to be true" → "there is no strong post that can really be written" → "nobody posts it"
Maybe you can ask Jeff Kaufman why he has never provided any actual argument for this (I do apologize if he has and I have just missed it!).
Ah, gotcha, I guess that works. No, I don't have anything I would consider strong evidence; I just know it's come up more than anything else in my few dozen conversations over the years. I suppose I assumed it was coming up for others as well.
FWIW this seems wrong, not least because, as was correctly pointed out many times, there just isn't a lot of money in the AW space. I'm pretty sure GHD has far better places to fundraise from.
To the extent I have spoken to people (not Jeff, and not that much) about why they don't engage more on this, I thought the two comments I linked to in my last comment had a lot of overlap with the responses.
This is bizarre to me. This post suggests that between $30 and $40 million goes towards animal welfare each year (and it could be more now, as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I'd imagine some people would make it their overwhelming mission to ensure we don't (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn't seem too bad to me. I'm not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there's just no point doesn't stack up to me.
For the record, I have a few places I think EA is burning >$30m per year, though AW is not actually one of them. Most EAs I speak to seem to have similarly-sized bugbears? Though unsurprisingly they don't agree about where the money is getting burned...
So from where I stand I don't recognise your guess of how people respond to that situation. A few things I believe that might help explain the difference:
Most of the money is directed by people who don't read, or otherwise have a fairly low opinion of, the forum.
Posting on the forum is "not for the faint of heart".
On the occasion that I have dug into past forum prioritisation posts that were well-received, I generally find them seriously flawed or otherwise uncompelling. I have no particular reason to be sad about (1).
People are often aware that there's an "other side" that strongly disagrees with their disagreement and will push back hard, so they correctly choose not to waste our collective resources in a mud-slinging match.
I don't expect to have capacity to engage further here, but if further discussion suggests that one of the above is a particularly surprising claim, I may consider writing it up in more detail in future.
Maybe I don't speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn't optimal, but I wasn't aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by "burning money").
Maybe the burning money point is a bit of a red herring though, if the amount you're burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest you might be right overall that people who don't think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I'd love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel's on the topic of animal welfare vs global health.
As a small note, I don't think the "believe it because they want it to be true" point is really an argument either way. To state the obvious, animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
So I don't think the "want it to be true" argument stands really at all. Motivations are very strong on both sides, and from a "realpolitik" kind of perspective, there's so much more riding on this for animal researchers than there is for people like Yud and Zvi.
On the other hand, the "very few people believe animals aren't moral patients and haven't made great arguments for it" point for me stands very strong.
That is fair, but there are several additional reasons why most people would want animals not to be moral patients:
They can continue to eat them guilt-free and animals are tasty.
People can give to global health uncertainty-free and get "fuzzies" from saving human lives with pretty high confidence (I think we naturally get more fuzzies by helping people of our own species).
We wouldn't then, as a human species, be committing a grave moral atrocity, which would be a massive relief.
There aren't really similar arguments for wanting animals to be moral patients (other than "I work on animal welfare"), but I would be interested if I'm missing any relevant ones.
I agree with the other comments that the case against prioritising animal welfare is quite weak in this post.
If I understand the two envelopes problem correctly, the same reasoning could be used to justify switching funds from global health to animal welfare, but it could also justify switching funds currently allocated to animal welfare towards global health.
Anyway, I think the post lacks actual arguments about why animal welfare should not be prioritised. Preferences do not tell us much, as stated by Michael StJules, since animals also have preferences (they run away when they are hurt).
The toy examples present situations where it's equally likely that animal welfare is better or worse than global health (50% chance hedonism is true, 25% chance it's 1000x more/less effective).
But this is a strong assumption that severely lacks justification, in my opinion. Why would animals have a much lower moral weight than humans? This is the argument that needs to be addressed.
I agree-voted this. This post was much more "This argument in favour of X doesn't work[1]" rather than "X is wrong", and I wouldn't want anyone to think otherwise.
Or more precisely, doesn't work without more background assumptions.
Oh, ok. It's just that the first sentence and examples gave a slightly different vibe, but it's more clear now.
You preface this post as being an argument for Global Health, but it isn't necessarily. As you say in the conclusion, it is a call not to "focus on upside cases and discard downside ones as the Multiplier Argument pushes you to do". For you this works in favor of global health; for others it may not. Anyone who thinks along the lines of "how can anyone even consider funding animal welfare over global health, animals cannot be fungible with humans!", or similar, will have this argument pull them closer to the animal welfare camp.
I take on board this is just a toy example, but I wonder how relevant it is. For starters I have a feeling that many in the EA community place higher credence on moral theories that would lead to prioritizing animal welfare (most prominent of which is hedonism). I think this is primarily what drove animal welfare clearly beating global health in the voting. So the "50/50" in the toy example might be a bit misleading, but I would be interested in polling the EA community to understand their moral views.
You can counter this and say that people still aren't factoring in that global health destroys animal welfare on pretty much any other moral view, but is this true? This needs more justification as per MichaelStJules' comment.
Even if it is true, is it fair to say that non-hedonic moral theories favor global health over animal welfare to a greater extent than hedonism favors animal welfare over global health? That claim is essentially doing all the work in your toy example, but seems highly uncertain/questionable.
In theory I of course agree this can go either way; the maths doesn't care which base you use.
In practice, Animal Welfare interventions get evaluated with a Global Health base far more than vice versa; see the rest of Debate Week. So I expect my primary conclusion/TL;DR[1] to mostly push one way, and didn't want to pretend that I was being "neutral" here.
Ah, interesting that you think many people put >50% on hedonism and similarly-animal-friendly theories. 50% was intended to be generous; the last animal-welfare-friendly person I asked about this was 20-40% IIRC. Pretty sure I am even lower. So yes, I'd also be interested in polling here, more of wider groups (the general population? philosophers?) than of EAs, but I'd take either.
Copying to save people searching for it:
Multiplier Arguments are incredibly biased in favour of switching, and they get more biased the more uncertainty you have. Used naively in cases of high uncertainty, they will overwhelmingly suggest you switch intervention from whatever you use as your base.
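To put that in concrete terms, here is a minimal numerical sketch (the specific numbers are my own illustration, nothing more): two interventions with identical but independent uncertainty about their per-dollar value. The expected ratio between them looks heavily favourable to switching, the expected difference is exactly zero, and widening the uncertainty only makes the ratio look more favourable.

```python
# Quick sketch (illustrative numbers of my own) of the claim quoted above:
# when two interventions have the same uncertain per-$ value, drawn
# independently, the expected *ratio* E(B/A) looks dramatically favourable to
# switching even though the expected *difference* E(B - A), which is what
# matters, is zero. The wider the spread of possible values, the bigger the
# distortion.
from itertools import product

def expected_ratio_and_difference(values):
    """A and B each independently take one of `values` per $, all equally likely."""
    pairs = list(product(values, repeat=2))
    e_ratio = sum(b / a for a, b in pairs) / len(pairs)
    e_diff = sum(b - a for a, b in pairs) / len(pairs)
    return e_ratio, e_diff

print(expected_ratio_and_difference([1, 100]))     # (25.5025, 0.0)
print(expected_ratio_and_difference([1, 10_000]))  # (2500.500025, 0.0)
```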
I'm not sure what the scope of "similarly-animal-friendly theories" is in your mind. For me I suppose it's most if not all consequentialist / aggregative theories that aren't just blatantly speciesist. The key point is that the number of animals suffering (and that we can help) completely dwarfs the number of humans. Also, as MichaelStJules says, I'm pretty sure animals have desires and preferences that are being significantly obstructed by the conditions humans impose on them.
I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories. You seem to think that it's because they are making this simple mistake with the multiplier argument, with your evidence being that loads of people are citing the RP moral weights project. I suppose I'm not sure which of us is correct, but I would point out that people may just find the moral weights project important because they have some significant credence in hedonism.
<<I took the fact that the forum overwhelmingly voted for animal welfare over global health to mean that people generally favor animal-friendly moral theories.>>
I think "generally favor" is a touch too strong here: one could discount them quite significantly and still vote for animal welfare on the margin, because the funding is so imbalanced and AW is at a point where the funding is much more leveraged than ~paying for bednets.
Yep, completely with Jason here. I voted a smidge in favor of giving the 100 million to animal rights orgs, yet I'm pretty sure you'd consider me to have very human-friendly moral theories.
To push that thinking a bit further: compared with the general public, EAs have extremely animal-friendly theories. For example, I would easily be in the top 1 percent of humans by how animal-friendly my moral theory is (maybe the top 0.1 percent), but maybe in the bottom third of EAs?
That is a data point, even if many might mostly discount it.
What is your preferred moral theory out of interest?
When you say top 1 percent of animal-friendly-moral-theory humans but maybe in the bottom third of EAs, is this just, say, hedonism but with moral weights that are far less animal-friendly than, say, RP's?
Thanks Jack, I don't have a clear answer to that right now. I have a messy mix of moral theories in which hedonism would contribute.
I'm so uncertain about the moral weights of animals right now (and more so after debate week, but updated a bit in favor of animals) and I value certainty quite a lot. I have quite a low threshold for feeling like Pascal is mugging me ;).
Again I think it depends on what we mean by an animal-friendly moral theory or a pro-global health moral theory. I'd be surprised though if many people hold a pro-global health moral theory but still favor animal welfare over global health. But maybe I'm wrong.
I'll leave this thread here, except to clarify that what you say I "seem to think" is a far stronger claim than I intended to make or in fact believe.
Sorry that is fair, I think I assumed too much about your views.
One thing to be careful of re: question framing is to make sure to constrain the set of theories under consideration to altruism-relevant theories. E.g. many people will place nontrivial credence in nihilism, egoism, or commonsense morality, but most of those theories will not be particularly relevant to prioritizing the altruistic allocation of marginal donations.
Sorry to distract from the object level a bit, but I had a reaction to the parts I quoted above as feeling pretty unfriendly and indirectly disparaging to the things other people have written on the forum.
I realise that you said (to paraphrase) "there are many strong arguments that were not raised", and not "the arguments that were raised were not strong". Maybe you meant that there had been good arguments already, but more were missing. (Maybe you meant not enough had been posted about GH at all.) But I don't think it's too surprising that I felt the second thing in the air, even if you didn't say it, and I imagine that if I had written a pro-GH argument in the last week, I might feel kind of attacked.
Yeah I think there's something to this, and I did redraft this particular point a few times as I was writing it for reasons in this vicinity. I was reluctant to remove it entirely, but it was close and I won't be surprised if I feel like it was the wrong call in hindsight. It's the type of thing I expect I would have found a kinder framing for given more time.
Having failed to find a kinder framing, one reason I went ahead anyway is that I mostly expect the other post-level pro-GH people to feel similarly.
I agree with @AGB. I think there was only one seriously pro-GH article, from @Henry Howard (which I really appreciated), and a couple of very moderate pushbacks that could hardly be called strong arguments for GH (including mine). On the other hand there were almost 10 very pro-animal-welfare articles.
I actually argue in that post that it shouldn't be fixed across all sets of assumptions. However, the main point is that our units should be human-based under every set of assumptions, because we understand and value things in reference to our own (human) experiences. The human-based units can differ between sets of assumptions.
So, for example, you could have a hedonic theory, with a human-based hedonic unit, and a desire theory, with a human-based desire unit.[1] These two human-based units may not be intertheoretically comparable, so you could end up with a two envelopes problem between them.
The end result might be that the value of B relative to A doesn't differ too much across sets of assumptions, so it would look like we can fix the value of A, but I'm not confident that this is actually the case. I'm more inclined to say something like "B beats A by at least X times across most views I entertain, by credence". I illustrated how to bound the ratios of expected values with respect to one another, and how animals could matter a lot this way, in this section.
Or, say, multiple hedonic theories, each with its own human-based hedonic unit.
Hi Michael, just quickly: I'm sorry if I misinterpreted your post. For concreteness, the specific claim I was noting was:
In particular, the bolded section seems straightforwardly false for me, and I don't believe it's something you argued for directly?
Could you elaborate on this? I might have worded things poorly. To rephrase and add a bit more, I meant something like
(These personal reference point experiences can also be empathetic responses to others, which might complicate things.)
The section that the summary bullet point you quoted links to is devoted to arguing for that claim.
Anticipating and responding to some potential sources of misunderstanding:
I didn't intend to claim we're all experientialists and so only care about the contents of experiences, rather than, say, how our desires relate to the actual states of the world. The arguments don't depend on experientialism.
I mostly illustrated the arguments with suffering, which may give/reinforce the impression that I'm saying our understanding of value is based on hedonic states only, but I didn't intend that.
I can try, but honestly I don't know where to start; I'm well aware that I'm out of my depth philosophically, and this section just doesn't chime with my own experience at all. I sense a lot of inferential distance here.
Trying anyway: that section felt closer to an empirical claim that "we" already do things a certain way than an argument for why we should do things that way, and I don't seem to be part of the "we". I can pull out some specific quotes that anti-resonate and try to explain why, with the caveat that these explanations are much closer to "why I don't buy this" than "why I think you're wrong".
***
I am most sympathetic to this if I read it as a cynical take on human morality, i.e. I suspect this is more true than I sometimes care to admit. I don't think you're aiming for that? Regardless, it's not how I try to do ethics. I at least try to have my mind change when relevant facts change.
An example issue is that memory is fallible; you say that I have directly experienced human suffering, but for anything I am not experiencing right this second, all I can access is the memory of it. I have seen firsthand that memory often edits experiences after the fact to make them substantially more or less severe than they seemed at the time. So if strong evidence showed me that something I remember as very painful was actually painless, the "strength of my reason" to reduce that suffering would fall[1].
You use some other examples to illustrate how the empirical nature does not matter, such as discovering serotonin is not what we think it is. I agree with that specific case. I think the difference is that your example of an empirical discovery doesn't really say anything about the experience, while mine above does?
Knowing what is going on during an experience seems like a major contributor to how I relate to that experience, e.g. I care about how long it's going to last. Looking outward for whether others feel similarly, It Gets Better and the phrase "light at the end of the tunnel" come to mind.
You could try to fold this in and say that the pain of the dental drill is itself less bad because I know it'll only last a few seconds, or conversely that (incorrectly) believing a short-lived pain will last a long time makes the pain itself greater, but that type of modification seems very artificial to me and is not how I typically understand the words "pain" and "suffering".
...But to use this as another example of how I might respond to new evidence: if you showed me that the brain does in fact respond less strongly to a painful stimulus when the person has been told it'll be short, that could make me much more comfortable describing it as less painful in the ordinary sense.
There are other knowledge-based factors that feel like they directly alter my "scoring" of pain's importance as well, e.g. a sense of whether it's for worthwhile reasons.
I'm with your footnote here; it seems entirely conceivable to me that my own suffering does not matter, so trying to build ratios with it as the base has the same infinity issue, as you say:
Per my OP, I roughly think you have to work with differences, not ratios.
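As a rough sketch of why (with made-up numbers of my own, purely for illustration): if the base might turn out to be worth nothing at all, any expected ratio relative to it is undefined or blows up, while the expected difference stays perfectly well-behaved.

```python
# Rough sketch (made-up numbers) of the "infinity issue": if the base
# intervention A might turn out to be worth ~nothing, ratios relative to A
# are undefined or explode, but the expected difference is still well-behaved.
scenarios = [
    # (probability, value of A per $, value of B per $) -- purely illustrative
    (0.5, 0.0, 1.0),  # world where A turns out not to matter at all
    (0.5, 2.0, 1.0),  # world where A turns out to matter more than B
]

expected_difference = sum(p * (b - a) for p, a, b in scenarios)
print(expected_difference)  # 0.0 -- well-defined; here it favours neither

# The equivalent expected-ratio calculation divides by zero in the first world:
# sum(p * (b / a) for p, a, b in scenarios)  -> ZeroDivisionError
```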
***
Overall, I was left with a sense from the quote below and the overall piece that you perceive your direct experience as a way to ground your values, a clear beacon telling you what matters, and then we just need to pick up a torch and shine a light into other areas and see how much more of what matters is out there. For me, everything is much more "fog of war", very much including my own experiences, values and value. So (and this may be unfair) I feel like you're asking me "why isn't this clear to you?" and I'm like "I don't know what to tell you, it just doesn't look that simple from where I'm sitting".
Though perhaps not quite to zero; it seems I would need to think about how much of the total suffering is the memory of suffering.
Thanks, this is helpful!
I think what I had in mind was more like the neuroscience and theories of pain in general terms, or in typical cases (hence "typically"), not very specific cases. So, I'd allow exceptions.
Your understanding of the general neuroscience of pain will usually not affect how bad your pain feels to you (especially when you're feeling it). Similarly, your understanding of the general neuroscience of desire won't usually affect how strong (most of) your desires are. (Some people might comfort themselves with this knowledge sometimes, though.)
This is what I need, when we think about looking for experiences like ours in other animals.
On your specific cases below.
The fallible pain memory case could be an exception. I suspect there's also an interpretation compatible with my view without making it an exception: your reasons to prevent a pain that would be like you remember the actual pain you had (or didn't have) are just as strong, but the actual pain you had was not like you remember it, so your reasons to prevent it (or a similar actual pain) are not in fact as strong.
In other words, you are valuing your impression of your past pain, or, say, valuing your past pain through your impression of it.[1] That impression can fail to properly track your past pain experience.[2] But, holding your impression fixed, if your past pain or another pain were like your impression, then there wouldn't be a problem.
And knowing how long a pain will last probably often does affect how bad/intense the overall experience (including possible stress/fear/anxiety) seems to you in the moment. And either way, how you value the pain, even non-hedonically, can depend on the rest of your impression of things, and, as you suggest, contextual factors like "whether it's for worthwhile reasons". This is all part of the experience.
The valuing itself is also part of the impression as a whole, but your valuing is applied to, or a response to, parts of the impression.
Really, ~all memories of experiences will be at least somewhat off, and they're probably systematically off in specific ways. How you value pain while in pain and as you remember it will not match.
I thought that post used the "equality result" as a hypothetical and didn't claim it was correct.
When first introduced:
At the end of the post:
I think the right post to refer readers to is probably this one, where chicken experiences are 1/3 of humans'. (Which isn't too far off from 1x, so I don't think this undermines your post.)