Note that the logarithm of a positive weighted geometric mean is the weighted arithmetic mean of the logarithms: $\log\left(\prod_i x_i^{w_i}\right) = \sum_i w_i \log(x_i)$.
So, instead of switching to the weighted geometric mean, you could just take the logarithm of your factors.
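Here’s a minimal Python sketch of that identity, with made-up factor values and weights:

```python
import math

factors = [4.0, 25.0, 1.5]  # hypothetical positive factor scores
weights = [0.5, 0.3, 0.2]   # hypothetical weights summing to 1

# Weighted geometric mean: prod(x_i ** w_i)
geo_mean = math.prod(x ** w for x, w in zip(factors, weights))

# Weighted arithmetic mean of the logs
log_mean = sum(w * math.log(x) for x, w in zip(factors, weights))

# The two routes agree, up to floating point
assert math.isclose(math.log(geo_mean), log_mean)
```

And since the logarithm is monotonic, ranking options by the weighted geometric mean of their factors is the same as ranking them by the weighted arithmetic mean of their log factors.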
FWIW, when I have a weighted factor model to build, I think about how I can turn it into a BOTEC, and try to get it close(r) to a BOTEC. I did this for my career comparison and a geographic weighted factor model.
My understanding from conversation with SWP is that for shrimp, the electric stunning also just kills the shrimp, and it’s all over very quickly.
It might be different for fish.
This is less clear for shrimp, though. I don’t know if they find the cold painful at all, and it might sedate them or even render them unconscious. But I imagine that takes time, and meanwhile they’re being crushed by each other and by the ice in the ice slurry.
Also this thread (and maybe especially my response) may be useful.
Ok, this makes more sense.
What do you count as “other individual”? Any physical system, including overlapping ones? What about your brain, and your brain but not counting one electron?
I don’t think that gives you an actual proper quantitative prior, as a probability distribution.
I don’t think you should give 0 probability to individual cells being conscious, because then no evidence or argument could move you away from that, if you’re a committed Bayesian. I don’t know what an uninformed prior could look like. I imagine there isn’t one. It’s the reference class problem.
You should even be uncertain about the fundamental nature of reality. Maybe there are things more basic than fundamental particles, like strings. Or maybe something else. They could be conscious or not, and they may not exist at all.
I’m sympathetic to gradualism.
I’m also sympathetic to the view that no number of toe stubs aggregate to outweigh a lifetime of torture (maybe unless the toe stubs together feel like intense torture).
People could still pay for animal products from animals that are more traditionally/conventionally raised, without new technologies, for political, religious or other identity reasons. This could be most (social) conservatives. And AI could make the world’s poor (social) conservatives much wealthier and able to afford far more animal products.
The cost to avert the suffering would be trivial, but so too would be the cost of conventional animal products, if and because everyone is rich. People could also refuse payment or other incentives to avoid conventional animal products.
And animals could continue to be bred for productivity at the cost of welfare, like has been done for broilers (although this trend for broilers is reversing in the West, because of animal advocacy from our extended community). Genetic engineering could also happen, but people averse to cultured and plant-based products might be averse to that, too, anyway. Some may be selectively averse, though.
I’d guess slaughter would be more humane, with better stunning, and maybe anaesthesia/painkillers would actually be used widely for painful mutilation procedures, and/or we’d just mutilate less. I’d guess people mostly wouldn’t oppose those, although many Muslims do oppose stunning for slaughter for religious reasons.
And high welfare farming can be done in ways acceptable to conservatives, e.g. fairly natural, natural breeds, mostly outdoors, or with lots of space and enrichment. However, they may oppose transition towards that for political reasons. They tend to vote against farmed animal welfare reforms, probably not just for cost reasons, but also for political reasons.
It’s worth distinguishing different attentional mechanisms, like motivational salience from stimulus-driven attention. The flinch might be stimulus-driven. Being unable to stop thinking about something, like being madly in love or grieving, is motivational salience. And then there’s top-down/voluntary/endogenous attention, the executive function you use to intentionally focus on things.
We could pick any of these and measure their effects on attention. Motivational salience and top-down attention seem morally relevant, but stimulus-driven attention doesn’t.
I don’t mean to discount preferences if interpersonal comparisons can’t be grounded. I mean that if animals have such preferences, you can’t say they’re less important (there’s no fact of the matter either way), as I said in my top-level comment.
What do you think of the following evidence?
Rats and pigs seem to be able to discriminate anxiety from its absence generalizably across causes with a learned behaviour, like pressing a lever when they would apparently feel anxious.[1] In other words, it seems like they can be taught to tell us what they’re feeling in ways unnatural and non-instinctive to them. To me, the difference between this and human language is mostly just a matter of degree, i.e. we form more associations and form them more easily, and we do recursion.
Graziano (2020, pdf), an illusionist and the inventor of Attention Schema Theory, also takes endogenous/top-down/voluntary attention control to be evidence of having a model (schema) of one’s own attention.[2] Then, according to Nieder (2022), there is good evidence for the voluntary/top-down control of attention (and working memory) at least across mammals and birds, and some suggestive evidence for it in some fish.
And I would expect these to happen in fairly preserved neural structures across mammals, at least, including humans.
I also discuss desires and preferences in other animals more here and here.
Carey and Fry (1995) showed that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Many more such experiments were performed on rats, as discussed in Sánchez-Suárez, 2016, summarized in Table 2 on pages 63 and 64 and discussed further across chapter 3.
Rats could discriminate between the injection of the anxiety-inducing drug PTZ and saline injection, including at subconvulsive doses. Various experiments with rats and PTZ have effectively ruled out convulsions as the discriminant, further supporting that it’s the anxiety itself that they’re discriminating, because they could discriminate PTZ from control without generalizing between PTZ and non-anxiogenic drugs, and with the discrimination blocked by anxiolytics and not nonanxiolytic anticonvulsants.
Rats further generalized between various pairs of anxiety(-like) states, like those induced by PTZ, drug withdrawal, predator exposure, ethanol hangover, “jet lag”, defeat by a rival male, high doses of stimulants like bemegride and cocaine, and movement restraint.
However, Mason and Lavery (2022) caution:
But could such results merely reflect a “blindsight-like” guessing: a mere discrimination response that need not reflect underlying awareness? After all, as we have seen for S.P.U.D. subjects, decerebrated pigeons can use colored lights as DSs (128), and humans can use subliminal visual stimuli as DSs [e.g., (121)]. We think several refinements could reduce this risk.
I would expect that checking which brain systems are involved and what their typical functions are could provide further evidence. The case for other mammals would be strongest, given more preserved functions across them, including humans.
Any creature that can endogenously direct attention must have some kind of attention schema, and good control of attention has been demonstrated in a range of animals including mammals and birds (e.g., Desimone & Duncan, 1995; Knudsen, 2018; Moore & Zirnsak, 2017). My guess is that most mammals and birds have some version of an attention schema that serves an essentially similar function, and contains some of the same information, as ours does. Just as other animals must have a body schema or be condemned to a flailing uncontrolled body, they must have an attention schema or be condemned to an attention system that is purely at the mercy of every new sparkling, bottom-up pull on attention. To control attention endogenously implies an effective controller, which implies a control model.
Then they will go have experiences, and regardless of what they experience, if they then choose to “pin” the EV-calculation to their own experience, the EV of switching to benefitting non-humans will be positive. So they’ll pay 2 pennies to switch back again. So they 100% predictably lost a penny. This is irrational.
You’re assuming they will definitely have a human experience (e.g. because they are human) and so switch to benefitting non-humans. If you’re assuming that, but not allowing them to assume that themselves, then they’re being exploited through asymmetric information or their priors not matching the situation at hand, not necessarily irrationality.
If they assume they’re human, then they can pin to what they’d expect to experience and believe as a human (even if they haven’t experienced it yet themself), and then they’d just prioritize non-humans from the start and never switch.
But you can instead assume it’s actually 50-50 whether you end up as a human or an alien, and you have these two options:
1. At an extra cost of 1 penny, get the human experience or get the alien experience, 50% probability each, pin to it, and help the other beings.
2. At no extra cost, flip a coin, with heads for helping humans and tails for helping aliens, and then commit to following through on that, regardless of whether you end up having the human experience or the alien experience.
I think there’s a question of which is actually better. Does 2 stochastically dominate 1? You find something out in 1, and then help the beings you will come to believe it’s best to help (although this doesn’t seem like a proper Bayesian update from a prior). In 2, if you end up pinning to your own experience, you’ll regret prioritizing humans if your experience is human, and you’ll regret prioritizing aliens if your experience is alien.
See also this comment and this thread.
I wouldn’t say I’m confident either way.
Subjective well-being studies are usually not assessing hedonic well-being, but life satisfaction. People can be satisfied with their lives because they have things important to them that are going well (family, friends, other goals), or by comparing their lives to others’ around them, and these can be more important to them than their own average hedonic well-being when they judge their own lives.
If you have particular studies in mind that get at hedonic well-being (“affect” in the literature, sometimes via experience sampling) specifically, I’d be interested in them, though. I haven’t really looked into this myself. I’m just doing the accounting intuitively by imagining how people spend their time. And a lot of that is work (including housework, cooking), and probably more so for poor people in low-income countries.
(FWIW, I’m not a hedonist.)
Also, for comparisons to global health in particular, we should be thinking about what a life in full health for the potential beneficiaries of the relevant charities would be like. They still live in poverty and spend a lot of time working, which are sources of frustration, stress and discomfort. It wouldn’t surprise me to find out their lives are (mildly) net negative hedonically, even if they prefer to live on the whole and judge their lives as positive.
Saving their lives could still be good under hedonism even if their lives turn out to be net negative hedonically, if and because it increases the hedonic welfare of others enough. Losing a child is traumatic and horrible. And there are economic benefits to saving lives, which should reduce the stressors of poverty.
I mostly agree with your reasoning before even getting into moral uncertainty and up to and including this:
After the two issues I am willing to quantify we’re down to around 3.3x, and we’re still assuming hedonism.
However, if we’re assuming hedonism, I think your starting point is plausibly too low for animal welfare interventions, because it underestimates the disvalue of pain relative to life in full health, as I argue here.
I also think your response to the Tortured Tim thought experiment is reasonable. Still, I would say:
If you weigh desires/preferences by attention or their effects on attention (e.g. motivational salience), then the fact that intense suffering is so disruptive and would take priority over attention to other things in your life means it would matter a lot, supporting RP’s take. And if you weigh desires/preferences by attention or their effects on attention, it seems nonhuman animals matter a lot (but something like neuron count weighting isn’t unreasonable).
I assume this is not how you weigh desires/preferences, though, or else you probably wouldn’t disagree with RP here, and especially in the ways you do!
If you don’t weigh desires by attention or their effects on attention, I don’t see how you can ground interpersonal utility comparisons at all, especially between humans and other animals but even between humans, who may differ dramatically in their values. I still don’t see a positive case for animals not mattering much.
FWIW, I suspect RP’s DALY conversions are too low for the badness of pain assuming hedonism.
Here’s what RP wrote about the weights (from here, extending what you quoted in your post):
I used the following conversion factors to translate the duration of each of the four types of pain into DALY equivalents:
1 year of annoying pain = 0.01 to 0.02 DALYs
1 year of hurtful pain = 0.1 to 0.25 DALYs
1 year of disabling pain = 2 to 10 DALYs
1 year of excruciating pain = 60 to 150 DALYs
I arrived at these intensity-to-DALY conversions by looking at the descriptions of and disability weights assigned to various conditions assessed by the Global Burden of Disease Study in 2019 and comparing these to the descriptions of each type of pain tracked by the Welfare Footprint Project.
Reasoning about the weights directly: DALYs are normalized to reflect the life of a typical person in perfect health (DALY weight 0). Such a life still contains some suffering, frustration, boredom, and I’d guess the joys only reach an intensity similar to disabling pain for brief periods at a time (e.g. laughter) or basically never at all. As a result, I’d put 1 year of hurtful pain close to 1 DALY at a minimum, and possibly higher. And disabling pain seems at least 5x as bad/intense as hurtful pain to me, so 1 year of disabling pain should be at least around 5 DALYs.
I’m personally more sympathetic to disabling pain being ~50x more intense than hurtful pain (or higher), which would give something like 50 DALYs per year of disabling pain.
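To make the arithmetic explicit, here’s a minimal sketch comparing RP’s quoted conversions with the ones implied by my reasoning above (the ~1 DALY/year floor for hurtful pain and the 5x to 50x multipliers for disabling pain are my assumptions, not RP’s):

```python
# RP's quoted conversions, in DALYs per year of pain
rp = {
    "annoying": (0.01, 0.02),
    "hurtful": (0.1, 0.25),
    "disabling": (2, 10),
    "excruciating": (60, 150),
}

hurtful_floor = 1.0          # my assumption: >= ~1 DALY per year of hurtful pain
for multiplier in (5, 50):   # my assumption: disabling is 5x to 50x as intense as hurtful
    implied = hurtful_floor * multiplier
    print(f"disabling at {multiplier}x hurtful: >= {implied} DALYs/year "
          f"(RP: {rp['disabling'][0]} to {rp['disabling'][1]})")
```

Either way, the implied numbers for disabling pain sit at or above the top of RP’s 2 to 10 range.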
The report also doesn’t explain the exact process for getting these numbers, but here are some potential sources of bias worth flagging:
DALY weights in the literature do not (I’d guess) reflect being in such pain every (waking) hour of your life in a year, but 1 year of X pain does, by assumption.
DALY weights in the literature don’t assume hedonism at all, and probably reflect what’s at stake non-hedonically for someone if they were to die early or other things they care about, because the responses used to estimate them come from people who are not usually hedonists. They therefore overestimate the hedonic value of life in full health.
These could both lead to underestimating the badness of pain.
The report might have accounted for these, but I can’t tell.
The humans and aliens have (at least slightly) different concepts for the things they’re valuing, each being informed by and partly based on their own direct experiences, which differ. So they can disagree on the basis of caring about different things and having different views.
This is like one being hedonistic utilitarian and the other being preference utilitarian. They’re placing value on different concepts. It’s not problematic for them to disagree.
I might have higher probability thresholds for what I consider Pascalian, but it’s also a matter of how much of my time and resources I have to give. This feels directly intuitive to me, and it can be cashed out in terms of normative uncertainty about decision theory/my risk appetite. I limit my budget for views that are more risk neutral.
Voting is low commitment, so it’s not Pascalian in this way. Devoting your career to nuclear policy seems Pascalian to me. Working on nuclear policy among many other things you work on doesn’t seem Pascalian. Fighting in a war with only the main outcome of who wins or loses in mind seems Pascalian.
Some of these things may have other benefits regardless of whether you change the main binary-ish outcome you might have in mind. That can make them not Pascalian.
Also, people do these things without thinking much or at all about the probability that they’d affect the main outcome. Sometimes they’re “doing their part”, or it’s a matter of identity or signaling. Those aren’t necessarily bad reasons. But they’re not even bothering to check whether it would be Pascalian.
EDIT: I’d also guess the people self-selecting into doing this work, especially without thinking about the probabilities, would have high implied probabilities of affecting the main binary-ish outcome, if we interpreted them as primarily concerned with that.
And I think this usually means some factors, in their units, like scale (e.g. number of individuals, years of life, DALYs, amount of suffering) and probability of success (%), should be multiplied, and usually not weighted at all, except when you want to calculate a factor multiple ways and average them. Otherwise, you’ll typically get weird units.
And what is the unit conversion between DALYs and a % chance of success, say? This doesn’t make much sense, and probably neither would any weights in a weighted sum. Adding factors with different units together doesn’t make much sense if you want to interpret the final results in a scope-sensitive way.
This all makes most sense if you only have one effect you’re estimating, e.g. one direct effect and no indirect effects. Different effects should be added. A more complete model could then be the sum of multiplicative models, one multiplicative model for each effect.
EDIT: But also, BOTECs and multiplicative models may be more sensitive to their factors, and more sensitive to errors in factor values when ranking. So, it may be best to do sensitivity analysis, with a range of values for the factors, as in the sketch below. But that’s more work.
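As a toy illustration of both points (the sum-of-products structure and a crude sensitivity check), here’s a minimal sketch; the effect names, factor ranges and units are all made up:

```python
from math import prod

# Each effect is a product of factors whose units multiply out to a common
# unit (say, DALYs averted), e.g. (individuals affected) x (DALYs averted
# per individual) x (probability of success). All numbers are hypothetical.
# Factors are given as (low, high) ranges for a crude sensitivity check.
effects = {
    "direct effect":   [(1e6, 1e7), (0.1, 0.5), (0.2, 0.6)],
    "indirect effect": [(1e5, 1e6), (0.5, 2.0), (0.05, 0.2)],
}

def total(pick):
    """Sum of multiplicative models: one product of factors per effect."""
    return sum(prod(pick(lo, hi) for lo, hi in ranges)
               for ranges in effects.values())

print("midpoint:", total(lambda lo, hi: (lo + hi) / 2))
print("all-low: ", total(lambda lo, hi: lo))
print("all-high:", total(lambda lo, hi: hi))
```

The all-low and all-high corners overstate the plausible range, since the factors won’t all err in the same direction, but they’re a cheap first pass before anything like a Monte Carlo over the factor distributions.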