3) “Practical issues” with utilitarianism vs. “ontological” concerns with value
I can make sense of the notion of something like “a community of rational agents” or “sentient beings”, and I can see why I value principles coming from this notion; but I’m not sure what a “point of view of the universe” (POVU) can mean. This is not an issue about abstraction per se. (I’m sorry, this is gonna be even more confusing than the previous comments, but I believe this very discussion is entangled in too many things, not just my thoughts.)
First, there are some issues concerning decision theory: I don’t know what sort of agents, preferences, and judgments figure in the POVU; also, if the universe is infinite, the POVU may run into the nihilism of infinite ethics. There are many proposals to avoid these obstacles, though.
I think the overall issue is that, even if you can make sense of the POVU, it’s underspecified – and then you have to choose a more “normal” POV to make sense of it (the “abstract communities” I mentioned above).
To see how this is different from “practical concerns”, take Singer’s mom example: I can totally understand that he spends more resources on his mother than on starving kids. On the other hand, I could also understand if he acted as a hardcore utilitarian. I’d find it a bit alien, but still rational and certainly not plain wrong; the same goes if you told me that someone else, in a different society far away from here, 500 years in the past or the future, had let their elders die to save strangers.
Now let’s do some sci-fi: I’d react very differently if you told me that a society had built a Super AI, the God Emoji, to turn their cosmic endowment into something like the “minimal hedonic unit”—see this SMBC strip. Or, to draw from another SMBC strip, if a society had decided to vanish from the Earth to get into a hedonic simulation. I think this would be a tragedy and a waste. (And that Aaron should declare SMBC comics hors concours for the EA Forum creative prize.) However, I’m not sure the world in My Little Pony: Friendship is Optimal, or the hedonist aliens in Three Worlds Collide, would be equally a waste—even though I don’t want any of that for our descendants.
But I don’t think even these examples picture something like “the POV of the universe”; I think they try to capture a conception of what the POV of sentient life, or the POV of all rational beings, could be… But these notions are more “parochial” than philosophers usually admit—they still focus on a community of beings doing the evaluation. If that’s the case, though, you could think about some hard constraints on your population axiology – concerning the “minimal status” of the members of the community that I (or any other agent in our decision problem) would want to belong to. In this sense, the sci-fi examples above are “wrong” to me: I can be in no “community” with the “pleasure structures” of the God Emoji; and I don’t think the “community” I’d form with the hedonist aliens would be optimal.
Maybe I’m being biased… but it’s hard for me to avoid something like that when I think about what policies and values I’d want for the long-term future (I guess that’s why we would need some sort of Long Reflection). I want our descendants to be very different from me, even in ways I’d find strange, just like Aristotle would likely find my values strange… and yet I think of myself (and them) as sharing a path with him, and I believe he could see it this way, too. So I believe Scheffler has a point here: it’s still me doing a good deal of the valuing. I think it’s way less conservative than what he thinks, though.