While I was at RP, we wrote about a similar hypothesis here.
This excerpt is the one I’d personally highlight as reason for skepticism:
Abstracting away from fruit fly brains, it’s likely that some functions required for consciousness or valence—or realized along the way to generate conscious valence—are fairly high-order, top-down, highly integrative, bottlenecking, or approximately unitary, and some of these are very unlikely to be realized thousands of times in any given brain. Some candidate functions are selective attention,[11] a model of attention,[12] various executive functions, optimism and pessimism bias, and (non-reflexive) appetitive and avoidance behaviors. Some kinds of valenced experiences, like empathic pains and social pains from rejection, exclusion, or loss, depend on high-order representations of stimuli, and these representations seem likely to be accessible or relatively few in number at a time, so we expect the same to hold for the negative valence that depends on them. Physical pain and even negative valence generally may also turn out to depend on high-order representations, and there’s some evidence they depend on brain regions similar to those on which empathic pains and social pains depend (Singer et al., 2004, Eisenberger, 2015). On the other hand, if some kinds of valenced experiences occur simultaneously in huge numbers in the human brain, but social pains don’t, then, unless these many valenced experiences have tiny average value relative to social pains, they would morally dominate the individual’s social pains in aggregate, which would at least be morally counterintuitive, although possibly an inevitable conclusion of Conscious Subsystems.
I cited your post (at the end of the 2nd paragraph of “How these implications are revisionary”) as an exploration of a different idea from mine, namely that one brain might have more moral weight than another because it contains more experiences at once. Your excerpt seems to highlight this different idea.
Are you saying your post should be read as also exploring the idea that one brain might have more moral weight than another even if they each contain one experience, because one experience is larger than the other? If so, can you point me to the relevant bit?
I think some of the same arguments in our post, including my quoted excerpt, apply if you instead think of counting multiple valenced (pleasurable, unpleasant) components (or “sub-experiences”) of one experience. I had thought of having more valenced components as being like having a visual field with more detail, but that didn’t make it into publication.
Sensations are (often) “location-specific”. Your visual field, for example, has many different sensations simultaneously, organized spatially.
To add to what I already wrote, I think the case for there being many, many accessible valenced components simultaneously is weak:
I don’t think there’s any scientific evidence for it.
It would be resource-costly not to use the same valence-generating structures in a location-independent way: we don’t need to re-represent location information that’s already captured by the sensory components.
There is some evidence that we do use these structures in location-independent ways, because the same structures are involved in physical pains, empathic pains (without painful sensation) and social pains, which can involve totally different mapped locations and maybe no location mapping at all.
If this is right, then I don’t see “experience size” varying much hedonically across animals.
(If you were instead thinking of one valenced component associated with many non-valenced sensory (or otherwise experiential) components, then I doubt that this would matter more on hedonism. There isn’t more pleasure or suffering or whatever just because there are more inputs.)
Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it—apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.
But they do seem like significantly different hypotheses to me. The reason is that it seems like the arguments presented against many experiences in a single brain can convince me that there is probably (something like) a single, highly “integrative” field of hedonic intensities, just as I don’t doubt that there is a single visual processing system behind my single visual field, and yet leave me fully convinced that both fields can come in different sizes, so that one brain can have higher welfare capacity than another for size reasons.
Thanks for the second comment though! It’s interesting, and to my mind more directly relevant, in that it offers reasons to doubt the idea that hedonic intensities are spread across locations at all. They move me a bit, but I’m still mostly left thinking:
- Re 1, we don’t need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that’s somehow an illusion, it’s the illusion that needs a lot of scientific evidence to debunk.
- Re 2, it’s not clear why we would have evolved to create valence (or experience) at all in the first place, so in some sense the fact that it would evidently be more efficient to have less of it doesn’t help here. But assuming that valence evolved to motivate us in adaptive ways, it doesn’t seem like such a stretch to me to say that forming the feeling “my hand is on fire and it in particular hurts” shapes our motivations in the right direction more effectively than forming the feeling “my hand is on fire and I’ve just started feeling bad overall for some reason”, and that this is worth whatever costs come with producing a field of valences.
- Re 3, the proposal I call (iii*) and try to defend is “that the welfare of a whole experience is, or at least monotonically incorporates, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field” (emphasis added). I put in the “incorporates” because I don’t mean to take a stand on whether there are also things that contribute to welfare that don’t correspond to particular locations in the phenomenal field, like perhaps the social pains you mention. I just find it hard to deny from first-hand experience that there are some “location-dependent” pains; and if so, I would think that these can scale with “size”. (I sketch one way of making the aggregation claim concrete below.)
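To make the aggregation in (iii*) a bit more concrete, here is one toy formalization (just an illustrative sketch; the notation is mine): write \(h_1, \dots, h_n\) for the hedonic intensities felt at the \(n\) locations of the phenomenal field, and \(r\) for any contributors to welfare that aren’t tied to a location (perhaps the social pains mentioned above). Then (iii*) only asks for something of the form
\[ W \;=\; g\big(A(h_1, \dots, h_n),\, r\big), \]
where the aggregation \(A\) is monotone in each \(h_i\) (for instance \(A = \sum_i h_i\)) and \(g\) is monotone in \(A\). On any such reading, holding per-location intensities fixed, a phenomenal field with more non-neutral locations contributes more in aggregate, whether positively or negatively, which is how “size” can matter even if each brain hosts only one experience.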
Ah wait, did your first comment always say “similar”? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it—apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.
There’s a good chance I edited that in, but I don’t remember for sure.
Re 1, we don’t need to appeal to scientific evidence about whether it’s possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that’s somehow an illusion, it’s the illusion that needs a lot of scientific evidence to debunk.
I don’t think this is an illusion. However, my understanding of the literature is that pain has 3 components: sensory, affective (unpleasantness) and motivational (aversive desire, motivational salience, how it pulls attention). The sensory component is location-specific and like a field. The affective component seems not like a field, imo, but this is not settled, AFAIK. The motivational component is (in part) the pull of your attention to the motivationally salient parts of your sensory field. It selects and amplifies signals from your sensory field.
it doesn’t seem like such a stretch to me to say that forming the feeling “my hand is on fire and it in particular hurts” shapes our motivations in the right direction more effectively than forming the feeling “my hand is on fire and I’ve just started feeling bad overall for some reason”, and that this is worth whatever costs come with producing a field of valences.
I think the mechanism of motivational salience could already account for this. You don’t need a field of valences, just for your attention to be pulled to the right parts of your sensory field.