Feel free to message me on here.
JackM
Conditional on fish actually being able to feel pain, it seems a bit far-fetched to me that a slow death in ice wouldn’t be painful.
I was trying to question you on the duration aspect specifically. If electric shock lasts a split second is it really credible that it could be worse than a slow death through some other method?
though I’ll happily concede it’s a longer process than electrical stunning
Isn’t this pretty key? If “Electrical stunning reliably renders fish unconscious in less than one second” as Vasco says, I don’t see how you can get much better than that in terms of humane slaughter.
Or are you saying that electrical stunning is plausibly so bad even in that split second so as to make it potentially worse than a much slower death from freezing?
I’m a bit confused if I’m supposed to be answering on the basis of my uninformed prior or some slightly informed prior or even my posterior here. Like I’m not sure how much you want me to answer based on my experience of the world.
For an uninformed prior I suppose I'd include any individual entity that I can visually see. I see a rock and I think "that could possibly be conscious". I don't lump the rock with another nearby rock and think maybe that 'double rock' is conscious, because the two just visually appear to me to be independent entities; they are not really visually connected in any physical way. This obviously does factor in some knowledge of the world, so I suppose it isn't a strictly uninformed prior, but it's about as uninformed as is useful to talk about?
Yeah, if I were to translate that into a quantitative prior, I suppose it would be that other individuals have roughly a 50% chance of being conscious (i.e. I'm agnostic on whether they are or not).
Then I learn about the world. I learn about the importance of certain biological structures for consciousness. I learn that I act in a certain way when in pain and notice other individuals do as well etc. That’s how I get my posterior that rocks probably aren’t conscious and pigs probably are.
I certainly don’t put 0 probability on that possibility.
I agree uninformed prior may not be a useful concept here. I think the true uninformed prior is “I have no idea what is conscious other than myself”.
How far and how to generalize for an uninformed prior is pretty unclear. I could say just generalize to other human males because I can’t experience being female. I could say generalize to other humans because I can’t experience being another species. I could say generalize to only living things because I can’t experience not being a living thing.
If you're truly uninformed I don't think you can really generalize at all. But in my current relatively uninformed state I generalize to those that are biologically similar to humans (e.g. have a central nervous system), as I'm aware of research on the importance of this type of biology within humans for elements of consciousness. I also generalize to other entities that act in a similar way to me when in supposed pain (they try to avoid it, cry out, bleed and become less physically capable, etc.).
To be honest I’m not very well-read on theories of consciousness.
I don’t see why we should generalise from our experience to the idea that individual organisms are the right boundary to draw.
For an uninformed prior that isn’t “I have no idea” (and I suppose you could say I’m uninformed myself!) I don’t think we have much of an option but to generalise from experience. Being able to say it might happen at other levels seems a bit too “informed” to me.
Most EAs I speak to seem to have similarly-sized bugbears?
Maybe I don’t speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn’t optimal, but I wasn’t aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by “burning money”).
Maybe the burning money point is a bit of a red herring though if the amount you’re burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest you might be right overall that people who don’t think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I’d love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel’s on the topic of animal welfare vs global health.
It’s the only part of my comment that argues Jeff was effectively saying he would have to “be” an animal advocate, which is exactly what you’re arguing against.
So I guess my best reply is just to point you back to that...
Oh well, was nice chatting.
You seem to have ignored the bit I made in bold in my previous comment.
OK so he says he would primarily be “known” as an anti-animal advocate not “become” one.
This is especially the case because If I did a good job at this I might end up primarily known for being an anti-animal advocate
But he then also says the following (bold emphasis mine):
I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
I’m struggling to see how what I said isn’t accurate. Maybe Jeff should have said “I would feel compelled to” rather than “I would need to”.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it. For example, I don’t think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you’re not under any obligation to write anything (well...perhaps some would argue you are, but I’ll concede you’re not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I’d write it up.
Thanks for your response.
This is especially the case because If I did a good job at this I might end up primarily known for being an anti-animal advocate
I’d be interested to know how likely you think it is that you could do a “good job”. You say you have a “bundle of intuitions and thoughts” which doesn’t seem like much to me.
I’m also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a “bundle of intuitions and thoughts” on what is ultimately a very difficult and important question.[1] In your original comment you say “This isn’t as deeply a considered view as I’d like”. Were you saying you haven’t considered deeply enough or that the general community hasn’t?
And thanks for the sketch of your reasoning but ultimately I don’t think it’s very helpful without some justification for claims like the following:
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
[1]
I also put myself at the far end of the spectrum in the other direction, so I feel I should say something about that. I think arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary) and I would not say I'm relying on intuition. I'm of course not certain that animals are moral patients, but even if you put only a small probability on it, the vast numbers of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare do more good in expectation than resources for global health. Ultimately, for this argument not to work on the basis that animals aren't moral patients, I think you need to be very confident of that to counteract the vast numbers of animals that can be helped.
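The expected-value logic here can be sketched with illustrative numbers. To be clear, every figure below is made up purely for the sake of the argument (the probabilities, the animals helped per dollar, the welfare units); the point is only that a modest probability of moral patienthood, multiplied by vast numbers, can dominate a near-certain benefit to far fewer individuals.

```python
# Illustrative expected-value comparison. All numbers are hypothetical,
# chosen only to show the structure of the argument, not as real
# cost-effectiveness estimates.

p_animal_patienthood = 0.2        # assumed probability animals are moral patients
animals_helped_per_dollar = 10    # hypothetical animals helped per $1
welfare_gain_per_animal = 0.01    # hypothetical welfare units per animal helped

p_human_patienthood = 1.0         # we're certain humans are moral patients
humans_helped_per_dollar = 1 / 5000  # assuming ~$5,000 to save a human life
welfare_gain_per_human = 30       # hypothetical welfare units per life saved

ev_animal = p_animal_patienthood * animals_helped_per_dollar * welfare_gain_per_animal
ev_human = p_human_patienthood * humans_helped_per_dollar * welfare_gain_per_human

print(f"EV per $ (animal welfare): {ev_animal:.4f}")  # 0.0200
print(f"EV per $ (global health):  {ev_human:.4f}")   # 0.0060

# How confident would you need to be that animals are NOT moral patients
# before global health wins under these (made-up) numbers?
breakeven_p = ev_human / (animals_helped_per_dollar * welfare_gain_per_animal)
print(f"Break-even probability of patienthood: {breakeven_p:.3f}")  # 0.060
```

With these particular numbers, global health only wins if you put less than about a 6% probability on animal moral patienthood, which is the "you need to be very confident" point above.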
Animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
That is fair, but there are several additional reasons why most people would want it to be true that animals are not moral patients:
They can continue to eat them guilt-free and animals are tasty.
People can give to global health uncertainty-free and get “fuzzies” from saving human lives with pretty high confidence (I think we naturally get more fuzzies by helping people of our own species).
We wouldn’t as a human species then be committing a grave moral atrocity which would be a massive relief.
There aren’t really similar arguments for wanting animals to be moral patients (other than “I work on animal welfare”) but I would be interested if I’m missing any relevant ones.
Hierarchicalism, as Ariel presents it, is based solely on species membership, where humans are prioritized simply because they are humans. See here (bold emphasis mine):
Solely by virtue of our shared species, helping humans may be lexicographically preferential to helping animals
So, the argument you’re making about mind complexity and behavior goes beyond the species-based hierarchicalism Ariel refers to:
We know conclusively that human experience is the same. On the animal front there are very many datapoints (mind complexity, brain size, behavior) which are priors that at least push us towards some kind of heirachialism.
While I understand the discomfort with the Aryan vs. non-Aryan analogy, striking analogies like this can sometimes help expose problematic reasoning. I feel like it’s a common approach in moral philosophy. But, I recognize that these comparisons are emotionally charged, and it’s important to use them carefully to avoid alienating others.
Thank you! Links in articles such as this just weren’t working.
This is the relevant David Pearce comment I was referring to which Yudkowsky just ignored despite continuing to respond to less challenging comments:
Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.
Anyone who cares about sentience-friendly intelligence should not harm our fellow subjects of experience. Shutting down factory farms and slaughterhouses will eliminate one of the world’s worst forms of severe and readily avoidable suffering.
FWIW this seems wrong, not least because as was correctly pointed out many times there just isn’t a lot of money in the AW space. I’m pretty sure GHD has far better places to fundraise from.
This is bizarre to me. This post suggests that between $30 and $40 million goes towards animal welfare each year (and it could be more now, as that post was written four years ago). If animals are not moral patients, this money is as good as getting burned. If we actually were burning this amount of money every year, I'd imagine some people would make it their overwhelming mission to ensure we don't (which would likely involve at least a few forum posts).
Assuming it costs $5,000 to save a human life, redirecting that money could save up to 8,000 human lives every year. Doesn’t seem too bad to me. I’m not claiming posts arguing against animal moral patienthood could lead to redirecting all the money, but the idea that no one is bothering to make the arguments because there’s just no point doesn’t stack up to me.
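As a quick check of the 8,000 figure, using the post's $30–40 million annual range and the assumed $5,000 cost per life saved:

```python
# Quick check of the figures above. The $5,000 cost per life saved is the
# assumption stated in the comment; $30-40M/year is the cited post's estimate.
annual_aw_funding_low = 30_000_000
annual_aw_funding_high = 40_000_000
cost_per_life_saved = 5_000

lives_low = annual_aw_funding_low // cost_per_life_saved
lives_high = annual_aw_funding_high // cost_per_life_saved
print(lives_low, lives_high)  # 6000 8000
```

So redirecting the money would save roughly 6,000–8,000 lives per year under these assumptions, with 8,000 being the upper bound quoted above.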
I’m not sure the middle step does actually fail in the EA community. Do you have evidence that it does? Is there some survey evidence for significant numbers of EAs not believing animals are moral patients?
If there is a significant number of people that think they have strong arguments for animals not counting, they should definitely post these and potentially redirect a great deal of altruistic funding towards global health.
Anyway, another possible causal chain might be:
‘argument is weak but some people intuitively believe it in part because they want it to be true’ → ‘there is no strong post that can really be written’ → ‘nobody posts it’
Maybe you can ask Jeff Kaufman why he has never provided an actual argument for this (I do apologize if he has and I have just missed it!).
Longtermism should certainly prioritise the best persistent state possible. If we could lock in a state of the world where there were the maximum number of beings with maximum wellbeing, of course I would do that, but we probably can't.
Ultimately the great value from a longtermist intervention does come from comparing it to the state of the world that would have happened otherwise. If we can lock in value 5 instead of locking in value 3, that is better than if we can lock in value 9 instead of locking in value 8.
I think we just have different intuitions here. The future will be different, but I think we can make reasonable guesses about what will be good. For example, I don’t have a problem with a claim that a future where people care about the wellbeing of sentient creatures is likely to be better than one where they don’t. If so, expanding our moral circle seems important in expectation. If you’re asking “why”—it’s because people who care about the wellbeing of sentient creatures are more likely to treat them well and therefore more likely to promote happiness over suffering. They are also therefore less likely to lock-in suffering. And fundamentally I think happiness is inherently good and suffering inherently bad and this is independent of what future people think. I don’t have a problem with reasoning like this, but if you do then I just think our intuitions diverge too much here.
Maybe fair, but if that’s the case I think we need to find those interventions that are not very ambiguous. Moral circle expansion seems one of those that is very hard to argue against. (I know I’m changing my interventions—it doesn’t mean I don’t think the previous ones I said are still good, I’m just trying to see how far your scepticism goes).
Considering this particular example: if we spread out to the stars then x-risk from asteroids drops considerably, as no one asteroid can kill us all. That is true. But the value of the asteroid-reduction intervention is borne from actually getting us to that point in the first place. If we hadn't reduced risk from asteroids and had gone extinct, then we'd have value 0 for the rest of time. If we can avert that and become existentially secure, then we have non-zero value for the rest of time. So yes, we would indeed have done an intervention that has impacts enduring for the rest of time. X-risk reduction interventions are trying to get us to a point of existential security. If they do that, their work is done.