To me, this sort of extrapolation seems like a reductio ad absurdum demonstrating that suffering is not the correct metric to minimize.
Here’s a thought experiment. Let’s say that all sentient beings were converted to algorithms, and suffering were a single number stored in memory. Actions are chosen so as to minimize that number. Now, let’s say you replaced everyone’s algorithm with a new one: wherever the old algorithm would have produced suffering = x, the new one produces suffering = x/2.
The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.
Have you done a great thing for the world, or is it a meaningless change of units?
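A minimal sketch of the setup, with every agent, action, and number below invented purely for illustration: if the decision rule only ever compares stored suffering values, a global monotone rescaling such as halving cannot change which action gets picked, even though the summed total drops by half.

```python
# Hypothetical toy model of the thought experiment: behavior depends
# only on *comparisons* between stored suffering values.

def choose_action(actions, suffering_of):
    # The decision rule reads suffering only through min(), i.e. through
    # its ordering, never through its absolute magnitude.
    return min(actions, key=suffering_of)

actions = ["stay", "flee", "eat"]
suffering = {"stay": 8.0, "flee": 2.0, "eat": 5.0}

before = choose_action(actions, lambda a: suffering[a])
after = choose_action(actions, lambda a: suffering[a] / 2)  # halve everything

assert before == after == "flee"   # behavior is unchanged
print(sum(suffering.values()), "->", sum(v / 2 for v in suffering.values()))
```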
I think it’s extraordinarily unlikely suffering could just be this. Some discussion here.

If your interpretation of the thought experiment is that suffering cannot be mapped onto a single number, then the logical corollary is that it is meaningless to “minimize suffering”, because any ordering you can place on the different possible amounts of suffering an organism experiences implies that they can be mapped onto a single number.
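To make that last step explicit: for a total preorder $\succeq$ on a countable set $S$ of suffering-states there always exists

$$u : S \to \mathbb{R} \quad \text{with} \quad u(a) \ge u(b) \iff a \succeq b$$

(this is the standard utility-representation fact; with uncountably many states one additionally needs an order-density condition, as the lexicographic order shows). So under mild assumptions, “can be ordered” and “can be scored by a single number” come to the same thing.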
I’m saying the amount of suffering is not just the output of some algorithm or something written in memory. I would define it functionally/behaviourally, if at all, although possibly at the level of internal behaviour, not external behaviour. But it would be more complex than your hypothesis makes it out to be.
> The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.
This probably doesn’t apply to Pearce’s qualia realist view, but it’s possible to have a functionalist notion of suffering where eliminating suffering would change people’s behavior.
For instance, I think of suffering as an experienced need to change something about one’s current experience, something that by definition carries urgency to bring about change. If you get rid of that, it has behavioral consequences. If a person experiences pain asymbolia, where they don’t consider their “pain” bothersome in any way, I would no longer call it suffering.
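Taken literally, this functionalist picture breaks the thought experiment’s premise that “nobody’s behavior changes”, because the magnitude of suffering, not just its ordering, does causal work. A toy contrast with the earlier sketch, where the threshold rule and all numbers are again invented:

```python
# Hypothetical functionalist toy model: "suffering" is an urgency signal
# that competes with the cost of acting, so its magnitude is load-bearing.

def acts_to_change_state(suffering_level: float, cost_of_acting: float) -> bool:
    # On this view suffering just is the pressure to alter one's current
    # experience; scale it down and the pressure goes with it.
    return suffering_level > cost_of_acting

COST = 3.0
print(acts_to_change_state(5.0, COST))      # True: urgency outweighs the cost
print(acts_to_change_state(5.0 / 2, COST))  # False: halving flips the behavior
```

On this model, pain asymbolia corresponds to zeroing the urgency signal while leaving the sensory report intact, which is why it no longer counts as suffering here.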