(We seem to be talking past each other in some weird way; I’m not even sure what exactly it is that we’re disagreeing over.)
It would (arguably) give results that people wouldn’t like, but assuming that the moral theory is correct and the machine understands it, it would almost necessarily do morally correct things.
Well sure, if we proceed from the assumption that the moral theory really was correct, but the point was that none of those proposed theories has been generally accepted by moral philosophers.
But that’s an even stronger claim than the one that moral philosophy hasn’t progressed towards such a goal. What reasons are there?
I gave one in the comment? That philosophy has accepted that you can’t give a human-comprehensible set of necessary and sufficient criteria for concepts, and if you want a system for classifying concepts you have to use psychology and machine learning; and it looks like morality is similar.
Except the field of ethics does it with actual arguments among experts in the field. You could tell the same story for any field: truths about physics can be determined by social consensus, since that’s just what the field of physics is. A physicist presents an experiment or hypothesis, another attacks it, and if the hypothesis survives the attacks and is compelling, then it is eventually accepted! And so on for all non-moral fields of inquiry as well. I don’t see why you think ethics would be special; basically everything can be modeled like this. But that’s ridiculous. We don’t look to social consensus for all forms of inquiry, because there is a difference between what ordinary people believe and what people believe when they are trained professionals in the subject.
I’m not sure what exactly you’re disagreeing with? It seems obvious to me that physics does indeed proceed by social consensus in the manner you describe. Someone does an experiment, then others replicate the experiment until there is consensus that this experiment really does produce these results; somebody proposes a hypothesis to explain the experimental results, others point out holes in that hypothesis, there’s an extended back-and-forth conversation and further experiments until there is a consensus that the modified hypothesis really does explain the results and that it can be accepted as an established scientific law. And the same for all other scientific and philosophical disciplines. I don’t think that ethics is special in that sense.
Sure, there is a difference between what ordinary people believe and what people believe when they’re trained professionals: that’s why you look for a social consensus among the people who are trained professionals and have considered the topic in detail, not among the general public.
Then why don’t you believe in morality by social consensus? (Or do you? It seems like you probably don’t, given that you’re an effective altruist.)
I do believe in morality by social consensus, in the same manner as I believe in physics by social consensus: if I’m told that the physics community has accepted it as an established fact that E=mc^2 and that there’s no dispute or uncertainty about this, then I’ll accept it as something that’s probably true. If I thought that it was particularly important for me to make sure that this was correct, then I might look up the exact reasoning and experiments used to determine this and try to replicate some of them, until I found myself to also be in consensus with the physics community.
Similarly, if someone came to me with a theory of what was moral, and it turned out that the entire community of moral philosophers had considered this theory and accepted it after extended examination, and I could not find any objections to it myself and found the justifications compelling, then I would probably also accept the moral theory.
But to my knowledge, nobody has presented a conclusive moral theory that would satisfy both me and nearly all moral philosophers and which would say that it was wrong to be an effective altruist—quite the opposite. So I don’t see a problem in being an EA.
Well sure, if we proceed from the assumption that the moral theory really was correct, but the point was that none of those proposed theories has been generally accepted by moral philosophers.
Your point was that “none of the existing ethical theories are up to the task of giving us such a set of principles that, when programmed into an AI, would actually give results that could be considered ‘good’.” But this claim simply begs the question by assuming that all the existing theories are false. And to claim that a theory would have bad moral results is different from claiming that it’s not generally accepted by moral philosophers. It’s plausible that a theory would have good moral results, in virtue of being correct, while not being accepted by many moral philosophers. Since there is no dominant moral theory, this is necessarily the case as long as some moral theory is correct.
I gave one in the comment? That philosophy has accepted that you can’t give a human-comprehensible set of necessary and sufficient criteria for concepts
If you’re referring to ethics, no, philosophy has not accepted that you cannot give such an account. You believe this, on the basis of your observation that philosophers give different accounts of ethics. But that doesn’t mean that moral philosophers believe it. They just don’t think that the fact of disagreement implies that no such account can be given.
It seems obvious to me that physics does indeed proceed by social consensus in the manner you describe. Someone does an experiment, then others replicate the experiment until there is consensus that this experiment really does produce these results; somebody proposes a hypothesis to explain the experimental results, others point out holes in that hypothesis, there’s an extended back-and-forth conversation and further experiments until there is a consensus that the modified hypothesis really does explain the results and that it can be accepted as an established scientific law. And the same for all other scientific and philosophical disciplines. I don’t think that ethics is special in that sense.
So you haven’t pointed out any particular features of ethics, you’ve merely described a feature of inquiry in general. This shows that your claim proves too much—it would be ridiculous to conduct physics by studying psychology.
Sure, there is a difference between what ordinary people believe and what people believe when they’re trained professionals: that’s why you look for a social consensus among the people who are trained professionals and have considered the topic in detail, not among the general public.
But that’s not a matter of psychological inquiry, that’s a matter of looking at what is being published in philosophy, becoming familiar with how philosophical arguments are formed, and staying in touch with current developments in the field. So you are basically describing studying philosophy. Studying or researching psychology will not tell you anything about this.