This seems like a textbook case of a Pascal’s mugging.
I would describe my ethical view as negative-leaning (or perhaps asymmetric), but still broadly utilitarian.
A Pascal’s mugging is an intentional move by an actor who is probably deceiving you. The fact that something has a low probability of a huge payoff doesn’t make it a mugging, and it doesn’t imply that we should ignore it.
How do you involve moral uncertainty or moral pluralism?
How do you set the scale for happiness and suffering on which moral value is supposed to slope unevenly? (1)
(1) http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/images/graph.png
I don’t think moral uncertainty is a real problem. The slope isn’t uneven; I just think suffering is worse than most EAs think it is, but the line would still be straight. I also do not support creating new happy beings instead of helping those who already exist and are suffering.
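To put that more concretely (just an illustrative sketch; the notation is mine, not anything taken from the linked graph): let h be the hedonic value of an experience on a shared scale, positive for happiness and negative for suffering. My view is the straight line, and my disagreement with most EAs is only about how far below zero common sufferings actually sit; the “uneven slope” picture instead reweights the negative side:

% illustrative notation only: h = hedonic value of an experience, k = extra weight on suffering
\[
  V_{\text{straight}}(h) = h
  \qquad\text{vs.}\qquad
  V_{\text{uneven}}(h) =
  \begin{cases}
    h    & \text{if } h \ge 0,\\
    k\,h & \text{if } h < 0,
  \end{cases}
  \qquad k > 1.
\]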
Are you completely certain that you should act according to your moral perspective?
I don’t think “moral uncertainty” is something that can be solved, or even a legitimate meta-ethical problem. You can’t compare how bad something is across multiple ethical theories. Is 1 violation of rights = 1 utilon? There’s also the possibility that the correct ethical theory hasn’t even been discovered yet, and we don’t have any idea what it would say.
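To spell out why (my own shorthand, not a formula anyone here has endorsed): handling moral uncertainty in the usual expected-value way would mean choosing the act a that maximizes

% illustrative notation only: p_i = credence in theory T_i, V_i(a) = how good T_i says act a is
\[
  \mathrm{EV}(a) \;=\; \sum_i p_i \, V_i(a),
\]

but the V_i live on different scales with no common unit, so the sum isn’t well defined; there is no fact of the matter about how many utilons one rights violation is worth.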
Cool. Interestingly, twice you’ve surprised me by endorsing a position that I thought you were repudiating. A straight line relating experience and value is exactly what I mean by symmetric utilitarianism, just as puzzling over this question is exactly what I have in mind when I say that moral uncertainty is a problem. The idea that the correct ethical theory hasn’t been discovered yet, if there is such a thing, seems to me the most important source of uncertainty of all, though it is rarely discussed.
I believe in a symmetry for people who already exist, but I also think empirically that many common sources of suffering are far worse than the common sources of happiness are good. For people who don’t exist, I don’t see how creating more happy people is good. The absence of happiness is not bad. This is where I think there is an asymmetry.
I don’t even understand what it would mean for an ethical theory to be correct. Does that mean it is hardwired into the physical constants of the universe? I guess I’m sort of a non-cognitivist.
Right, but is that for sources of happiness and suffering that are common among all the people who will exist across all time? Because almost all of the people who will exist (irrespective of your actions) don’t currently exist.
There’s a difficulty that I’d guess you’d be sensitive to: it’s hard to distinguish the absence of happiness from the presence of suffering, and vice versa. The difference between the two is not hardwired into the physical constants of the universe, to borrow a phrasing you might be sympathetic to (no snark intended).
If you’re a non-cognitivist, then you could ask whether you “should” (even rationally or egoistically) act according to your moral perspective. If you choose to live out your values under some description, for some reason, then they’re not going to be purely represented by any ethical theory anyway, and it’s unclear to me why you’d want to simplify your intuitions in that way.
If you don’t have a child, you are not decreasing your nonexistent offspring’s welfare/preference satisfaction. Beings who do not exist do not have preferences and cannot suffer. Once they exist (and become sentient), their preferences and welfare matter. This may not be hardcoded into the universe, but it’s not hard to distinguish between having a child and not having one.
I meant within one person. If you believe that there is a fundamental difference between intrapersonal and interpersonal comparisons, then you’re going to run into a wall trying to define persons… It doesn’t seem to me that this really checks out, putting aside the question of why one would want simple answers here as a non-cognitivist.