While I was at RP, we wrote about a similar hypothesis here.
This excerpt is the one I'd personally highlight as reason for skepticism:
> Abstracting away from fruit fly brains, it's likely that some functions required for consciousness or valence—or realized along the way to generate conscious valence—are fairly high-order, top-down, highly integrative, bottlenecking, or approximately unitary, and some of these are very unlikely to be realized thousands of times in any given brain. Some candidate functions are selective attention,[11] a model of attention,[12] various executive functions, optimism and pessimism bias, and (non-reflexive) appetitive and avoidance behaviors. Some kinds of valenced experiences, like empathic pains and social pains from rejection, exclusion, or loss, depend on high-order representations of stimuli, and these representations seem likely to be accessible or relatively few in number at a time, so we expect the same to hold for the negative valence that depends on them. Physical pain and even negative valence generally may also turn out to depend on high-order representations, and there's some evidence they depend on brain regions similar to those on which empathic pains and social pains depend (Singer et al., 2004, Eisenberger, 2015). On the other hand, if some kinds of valenced experiences occur simultaneously in huge numbers in the human brain, but social pains don't, then, unless these many valenced experiences have tiny average value relative to social pains, they would morally dominate the individual's social pains in aggregate, which would at least be morally counterintuitive, although possibly an inevitable conclusion of Conscious Subsystems.

And I expanded a bit more here.
I cited your post (at the end of the 2nd paragraph of "How these implications are revisionary") as an exploration of a different idea from mine, namely that one brain might have more moral weight than another because it contains more experiences at once. Your excerpt seems to highlight this different idea.
Are you saying your post should be read as also exploring the idea that one brain might have more moral weight than another even if they each contain one experience, because one experience is larger than the other? If so, can you point me to the relevant bit?
I think some of the same arguments in our post, including my quoted excerpt, apply if you instead think of counting multiple valenced (pleasurable, unpleasant) components (or "sub-experiences") of one experience. I had thought of having more valenced components as being like having a visual field with more detail, but that didn't make it into publication.
Sensations are (often) "location-specific". Your visual field, for example, has many different sensations simultaneously, organized spatially.
To add to what I already wrote, I think the case for there being many, many accessible valenced components simultaneously is weak:
1. I don't think there's any scientific evidence for it.
2. It would be resource-costly not to use the same structures that generate valence in a location-independent way. We don't need to re-represent location information already captured by the sensory components.
3. There is some evidence that we do use these structures in location-independent ways, because the same structures are involved in physical pains, empathic pains (without painful sensation) and social pains, which can involve totally different mapped locations and maybe no location mapping at all.
If this is right, then I don't see "experience size" varying much hedonically across animals.
(If you were instead thinking of one valenced component associated with many non-valenced sensory (or otherwise experiential) components, then I doubt that this would matter more under hedonism. There isn't more pleasure or suffering or whatever just because there are more inputs.)
Ah wait, did your first comment always say "similar"? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it—apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.
But they do seem like significantly different hypotheses to me. The reason is that it seems like the arguments presented against many experiences in a single brain can convince me that there is probably (something like) a single, highly "integrative" field of hedonic intensities, just as I don't doubt that there is a single visual processing system behind my single visual field, and yet leave me fully convinced that both fields can come in different sizes, so that one brain can have higher welfare capacity than another for size reasons.
Thanks for the second comment though! It's interesting, and to my mind more directly relevant, in that it offers reasons to doubt the idea that hedonic intensities are spread across locations at all. They move me a bit, but I'm still mostly left thinking:

- Re 1, we don't need to appeal to scientific evidence about whether it's possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.
- Re 2, it's not clear why we would have evolved to create valence (or experience) at all in the first place, so in some sense the fact that it would evidently be more efficient to have less of it doesn't help here. But assuming that valence evolved to motivate us in adaptive ways, it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.
- Re 3, the proposal I call (iii*) and try to defend is "that the welfare of a whole experience is, or at least monotonically incorporates, some monotonic aggregation of the hedonic intensities felt in these different parts of the phenomenal field" (emphasis added). I put in the "incorporates" because I don't mean to take a stand on whether there are also things that contribute to welfare that don't correspond to particular locations in the phenomenal field, like perhaps the social pains you mention. I just find it hard to deny from first-hand experience that there are some "location-dependent" pains; and if so, I would think that these can scale with "size".
> Ah wait, did your first comment always say "similar"? No worries if not (I often edit stuff just after posting!) but if so, I must have missed it—apologies for just pointing out that they were different points and not addressing whether they are sufficiently similar.
There's a good chance I edited that in, but I don't remember for sure.
> Re 1, we don't need to appeal to scientific evidence about whether it's possible to have different amounts of, say, pain in different parts of the phenomenal field. It happens all the time that we feel pain in one hand but not the other. If that's somehow an illusion, it's the illusion that needs a lot of scientific evidence to debunk.
I don't think this is an illusion. However, my understanding of the literature is that pain has 3 components: sensory, affective (unpleasantness) and motivational (aversive desire, motivational salience, how it pulls attention). The sensory component is location-specific and like a field. The affective component seems not like a field, imo, but this is not settled, AFAIK. The motivational component is (in part) the pull of your attention to the motivationally salient parts of your sensory field. It selects and amplifies signals from your sensory field.
> it doesn't seem like such a stretch to me to say that forming the feeling "my hand is on fire and it in particular hurts" shapes our motivations in the right direction more effectively than forming the feeling "my hand is on fire and I've just started feeling bad overall for some reason", and that this is worth whatever costs come with producing a field of valences.
I think the mechanism of motivational salience could already account for this. You don't need a field of valences, just for your attention to be pulled to the right parts of your sensory field.