There is no single opposite way; there are many alternatives to fixing human value. You could fix the value in fruit flies, shrimp, chickens, elephants, C. elegans, some plant, some bacterium, rocks, your laptop, GPT-4, an alien, etc.
I think a more principled approach would be to consider precise theories of how welfare scales, not necessarily fixing the value in any one moral patient, and then handle uncertainty between those theories with some other approach to moral uncertainty. However, there is an argument for fixing human value across many such theories: we directly value our own experiences, and we theorize about consciousness in relation to our own experiences, so we can fix the value in our own experiences and evaluate everything else relative to them.