Regarding whether to focus on numbers or biomass, I think the following articles could be relevant:
Brian Tomasik’s Is Brain Size Morally Relevant? explores arguments both for brain-size/complexity weighting and for ‘equality weighting’ (giving equal consideration to all systems). Despite the title, the article discusses not just brain size but sometimes also mental complexity (“Note: In this piece, I unfortunately conflate “brain size” with “mental complexity” in a messy way. Some of the arguments I discuss apply primarily to size, while some apply primarily to complexity.”).
The question of whether complexity matters is related to the intensity of experiences. Jason Schukraft’s Differences in the Intensity of Valenced Experience across Species explores this and seems to conclude that, for less familiar systems (well, animals; I think only animals are explored in that piece), it’s really unclear whether the “intensity range” is larger or smaller. They could potentially suffer more intensely, or they could potentially suffer less intensely.
Another Brian Tomasik article, Fuzzy, Nested Minds Problematize Utilitarian Aggregation, raises the question of how to divide physical reality into minds. Like Brian, I find the most intuitive approach is to “Sum over all sufficiently individuated ‘objects’” and to count subsystems (e.g., suffering subroutines) too. That said, I think the question is still super confusing. The article also notes that “[p]erforming any exact computations seem intractable for now,” although I don’t know enough math to say.
I think it’s ultimately all quite subjective, though (which doesn’t mean it’s not important!). Personally, I don’t see how we can definitively show that some approach is correct. (Brian Tomasik describes this problem as a “moral question” and frames it as having “[n]o objective answer,” but I find his language slightly confusing. Rather, I think there is an answer regarding suffering/sentience; it’s just that we might never have the tools to know that we’ve reached it. In the absence of those tools, much of what we’re doing when saying “there’s more suffering here” or “there’s more suffering there” may amount to, or at least resemble, a discussion of morality.) We’re also relying on a bunch of intuitions that were shaped by factors that don’t necessarily guide us to truths about suffering/sentience (I explore this in the Factors affecting how we attribute suffering section in my recent microorganism post).