There is some reason to believe that virtually everyone is too animal-unfriendly, including animal welfare advocates:
Everyone is a human. Humans will naturally be biased toward humans over other species.
An uninformed prior says all individuals have equal moral weight. Virtually all people—including animal advocates—give humans higher weight than any other species, which is definitely a bias in a technical sense, if not in the sense people usually use the word.
That’s one way of constructing an uninformed prior, but that seems quite a bit worse than starting from a place of equal moral weight among cells, or perhaps atoms or neurons. All of which would give less animal friendly results, though still more animal-friendly results than mainstream human morality.
(And of course this is just a prior, and our experience of the world can bring us quite a long way from whichever prior we think is most natural.)
Cells, atoms and neurons aren’t conscious entities in themselves. I see no principled reason for going to that level for an uninformed prior.
A true uninformed prior would probably say “I have no idea” but if we’re going to have some idea it seems more natural to start at all sentient individuals having equal weight. The individual is the level at which conscious experience happens, not the cell/atom/neuron.
Why do you think the individual is the level at which conscious experience happens?
(I tend to imagine that it happens at a range of scales, including both smaller-than-individual and bigger-than-individual. I don’t see why we should generalise from our experience to the idea that individual organisms are the right boundary to draw. I put some reasonable weight on some small degree of consciousness occurring at very small levels like neurons, although that’s more like “my intuitive concept of consciousness wasn’t expansive enough, and the correct concept extends here”).
To be honest I’m not very well-read on theories of consciousness.
For an uninformed prior that isn’t “I have no idea” (and I suppose you could say I’m uninformed myself!) I don’t think we have much of an option but to generalise from experience. Being able to say it might happen at other levels seems a bit too “informed” to me.
IDK, structurally your argument here reminds me of arguments that we shouldn’t assume animals are conscious, since we can only generalise from human experiences. (In both cases I feel like there’s not nothing to the argument, but I’m overall pretty uncompelled.)
How far and how to generalize for an uninformed prior is pretty unclear. I could say just generalize to other human males because I can’t experience being female. I could say generalize to other humans because I can’t experience being another species. I could say generalize to only living things because I can’t experience not being a living thing.
If you’re truly uninformed I don’t think you can really generalize at all. But in my current relatively uninformed state I generalize to those that are biologically similar to humans (e.g. have a central nervous system), as I’m aware of research about the importance of this type of biology within humans for elements of consciousness. I also generalize to other entities that act in a similar way to me when in supposed pain (try to avoid it, cry out, bleed and become less physically capable, etc.).
I don’t think you should give 0 probability to individual cells being conscious, because then no evidence or argument could move you away from that, if you’re a committed Bayesian. I don’t know what an uninformed prior could look like. I imagine there isn’t one. It’s the reference class problem.
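The "a zero prior is immovable" point falls straight out of Bayes' rule: if P(H) = 0 then the posterior is 0 regardless of the evidence. A minimal sketch (the likelihood numbers are made up purely for illustration):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# With a prior of exactly 0, even very strong (99:1) evidence moves nothing:
print(bayes_update(0.0, 0.99, 0.01))    # 0.0
# Any nonzero prior, however small, can be moved by evidence:
print(bayes_update(0.001, 0.99, 0.01))  # ~0.09
```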
You should even be uncertain about the fundamental nature of reality. Maybe there are things more basic than fundamental particles, like strings, or maybe something else. They could be conscious or not, and they may not exist at all.
I certainly don’t put 0 probability on that possibility.
I agree uninformed prior may not be a useful concept here. I think the true uninformed prior is “I have no idea what is conscious other than myself”.
I don’t think that gives you an actual proper quantitative prior, as a probability distribution.
Yeah, if I were to translate that into a quantitative prior I suppose it would be that other individuals each have roughly a 50% chance of being conscious (i.e. I’m agnostic on whether they are or not).
Then I learn about the world. I learn about the importance of certain biological structures for consciousness. I learn that I act in a certain way when in pain and notice other individuals do as well etc. That’s how I get my posterior that rocks probably aren’t conscious and pigs probably are.
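That prior-then-update picture can be sketched as a sequence of odds-form Bayesian updates. The likelihood ratios below are made-up numbers purely for illustration, and treating the absence of a piece of evidence as the inverse ratio is a simplifying assumption:

```python
def update(prior, lr):
    """One Bayesian update by likelihood ratio lr = P(E|conscious)/P(E|not conscious)."""
    odds = prior / (1 - prior)
    odds *= lr
    return odds / (1 + odds)

# Illustrative (made-up) likelihood ratios for two observations:
# having a central nervous system, and showing pain behaviour.
evidence = {"central nervous system": 20.0, "pain behaviour": 10.0}

p_pig = 0.5   # agnostic 50% prior
for lr in evidence.values():
    p_pig = update(p_pig, lr)       # pig shows both

p_rock = 0.5  # same prior
for lr in evidence.values():
    p_rock = update(p_rock, 1 / lr)  # rock shows neither; absence counts against

print(p_pig, p_rock)  # pig ends up near 1, rock near 0
```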
Ok, this makes more sense.
What do you count as “other individual”? Any physical system, including overlapping ones? What about your brain, and your brain but not counting one electron?
I’m a bit confused if I’m supposed to be answering on the basis of my uninformed prior or some slightly informed prior or even my posterior here. Like I’m not sure how much you want me to answer based on my experience of the world.
For an uninformed prior I suppose any individual entity that I can visually see. I see a rock and I think “that could possibly be conscious”. I don’t lump the rock with another nearby rock and think maybe that ‘double rock’ is conscious, because they appear to me to be independent entities that aren’t physically connected in any way. This obviously does factor in some knowledge of the world, so I suppose it isn’t a strictly uninformed prior, but it’s about as uninformed as is useful to talk about?
That’s an interesting argument, thank you, and I think others on the RP team might agree. It’s a reasonable perspective to have.
I agree that 99.9% of people are likely to be too animal-unfriendly, but personally (obviously, from this post) I think that animal welfare advocates are more likely to swing the other way, given the strong incentives to be able to better advocate for animals (understandable), publishing impact, and just being deep in the animal thought world.
I agree that an uninformed prior would say that all individuals have equal moral weight, but I think we have a lot of information, so I’m not sure why that’s super relevant here? Maybe I’m missing something.
Jeff Kaufman raised some similar points to the ‘animal-friendly researchers’ issue here, and there was some extended discussion in the comments there that you might be interested in, Nick!
Thanks, now that you mention it I do remember that, and it might have partly triggered my thinking at that juncture. FWIW I’ll add a reference to it.