Upvoted, because although I disagree with much of this on object level, I think the post is totally legit and I think we should encourage original thinking.
Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already at the level of meta-ethics. It seems to assume the existence of universal morals, which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories.

The only sensible meta-ethics I know of equates ethics with preferences. There does seem to be such a thing as intelligent agents with preferences (although we have no satisfactory mathematical definition yet). Of course, each agent has its own preferences, and the space of possible preferences is quite big (orthogonality thesis). Hence ethical subjectivism. Human preferences don't seem to differ much from human to human once you take into account that much of the difference in instrumental goals is explained by different beliefs rather than different terminal goals (=preferences). Therefore it makes sense, in certain situations, to use approximate models of ethics that don't explicitly mention the reference human, like utilitarianism.

On the other hand, there is no reason the precise ethics should have a simple description (complexity of value). It is a philosophical error to expect ethics to be low-complexity like physical law, since ethics (=preferences) is a property of the agent and has quite a bit of complexity put in by evolution. In other words, ethics is in the same category as the shape of Africa rather than Einstein's equations. Taking simplified models that account for only one value (e.g. pleasure) to the extreme is bound to lead to abhorrent conclusions, as all other values are sacrificed.
Happiness and suffering in the utilitarian sense are both extraordinarily complicated concepts and encompass a lot of different experiences. They’re shorthand for “things conscious beings experience that are good/bad.”
Meta-ethically I don’t disagree with you that much.
This strikes me as a strange choice of words since e.g. I think it is good to occasionally experience sadness. But arguing over words is not very fruitful.
I’m not sure this interpretation is consistent with “filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible.”
First, “pure happiness” sounds like a raw pleasure signal rather than “things conscious beings experience that are good” but ok, maybe it’s just about wording.
Second, "specifically geared" sounds like wireheading. That is, it sounds like these beings would be happy even if they witnessed the Holocaust, which again contradicts my understanding of "things conscious beings experience that are good." However, I guess it's possible to read it charitably (from my perspective) as minds that have a superior ability to have truly valuable experiences, i.e. some kind of post-humans.
Third, "tiny beings" sounds like primitive minds rather than the superhuman minds I would expect. But maybe you actually mean physical size, in which case I might agree: it seems much more efficient to run lots of post-humans on computronium than to allocate to each the material resources of a modern biological human. (At the moment, though, I have no idea what volume of computronium is optimal for running a single post-human: on the one hand, running a modern-like human is probably possible in a very small volume; on the other hand, a post-human might be much more computationally expensive.)
So, for a sufficiently charitable (from my perspective) reading I agree, but I’m not sure to which extent this reading is aligned with your actual intentions.