Thanks, I was referring to this as well, but should have had a second link for it as the Rethink page on neuron counts didn’t link to the other post. I think that page is a better link than the RP page I linked, so I’ll add it in my comment.
(Again, not speaking on behalf of Rethink Priorities, and I don’t work there anymore.)
(Btw, the quote formatting in your original comment got messed up with your edit.)
I think the claims I quoted are still basically false, though?
> Rethink’s work, as I read it, did not address that central issue, that you get wildly different results from assuming the moral value of a fruit fly is fixed and reporting possible ratios to elephant welfare as opposed to doing it the other way around.
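The normalization asymmetry being described here can be sketched with purely hypothetical numbers (the ratios below are illustrative, not anyone's actual moral-weight estimates). Suppose two theories about the elephant-to-fruit-fly welfare ratio are equally likely; which animal dominates in expectation flips depending on which one you fix as the unit:

```python
# Two equally likely hypothetical theories about the ratio
# (elephant welfare) / (fruit fly welfare).
ratios = [0.1, 10.0]
p = 0.5  # equal credence in each theory

# Normalize by the fruit fly (fix fly = 1) and take the expectation:
e_elephant = sum(p * r for r in ratios)       # 5.05: elephant ~5x the fly

# Normalize by the elephant (fix elephant = 1) and take the expectation:
e_fly = sum(p * (1.0 / r) for r in ratios)    # 5.05: fly ~5x the elephant

print(e_elephant, e_fly)
```

Each animal looks roughly five times as valuable as the other in expectation, depending only on the choice of unit, which is the two-envelopes-style problem at issue.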
There’s a case that conscious subsystems could dominate expected welfare ranges even without intertheoretic comparisons (but also possibly with them), so I think we were focusing on one of the strongest and most important arguments for humans potentially mattering more, assuming hedonism and expectational total utilitarianism. Maximizing expected choiceworthiness with intertheoretic comparisons is controversial and only one of multiple competing approaches to moral uncertainty. I’m personally very skeptical of it because of the arbitrariness of intertheoretic comparisons and its fanaticism (including chasing infinities, and lexically higher and higher infinities). Open Phil also already avoids making intertheoretic comparisons, but was more sympathetic to normalizing by humans if it were going to make them.
I don’t want to convey that there was no discussion; hence my linking to the discussion and saying I found it inadequate and largely missing the point from my perspective. I made an edit for clarity, but would welcome suggestions for another.
“Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?” explicitly considered a conscious subsystems version of this thought experiment, focusing on the more human-favouring side when you normalize by small systems like insect brains, which is the non-obvious side often neglected.
Your edit looks good to me. Thanks!