My worry is with the idea that we can get around this problem by evaluating the arguments ourselves. We’re not special. Academics evaluate the arguments just as we would, but understand them better. The only way I can see myself being justified in rejecting their views is by showing that they’re biased. So maybe my point wasn’t “the academics are right, so narrow consequentialism is wrong” but rather “most people who know much more about this than us don’t think narrow consequentialism is right, so we don’t know that it’s right”.
That’s a reasonable worry, but as far as the field of ethics as a whole is concerned, I would be much more worried about trusting the judgment of the average ethicist over ours.
I would also agree that the “we are not special” assumption seems like a reasonable best guess in the absence of evidence either way (although, at the risk of violating your not-coming-across-as-smug-and-arrogant recommendation, I’m genuinely unsure about whether it’s correct).
I’ve also thought a lot about ethics, and have been doing so since childhood. Admittedly, I haven’t read most of the philosophical texts written about these topics (nor, I suppose, have most professional ethicists, although I’ve certainly read far less than them). I have read a significant amount though, enough that most or all of the memorable arguments I’ve encountered have been repeated to me several times. Also, perhaps more surprisingly, I’m somewhat confident that I’ve never heard an argument against my opinions about ethics (the abstract issues, that is, not specific ones) that was both (1) not based on axiomatic assumptions/intuitions I disagree with and (2) something I hadn’t already thought of (of course, I may have forgotten some, but that also seems like something that would have been memorable). Examples where criterion #2 was met but #1 wasn’t include things like “the repugnant conclusion” (it doesn’t seem repugnant to me at all, so it never occurred to me that it could be seen as a possible counterargument). Philosophy class was a lot of “oh, so that argument has a name” (and also a lot of “what? do people find that a convincing argument against utilitarianism?”).
For all I know this could also be the experience of many people with opinions different from mine, but if so, it suggests that intuitions and/or base assumptions may be the determining factor for many, as opposed to knowledge and understanding of the arguments presented by differing sides. My suspicion is that the main contributor to the current “stalemate” in philosophical debates is that people have different intuitions and commitments. Some ethicists realize that utilitarianism would in some circumstances require us to prioritise other children to the extent that we let our own children starve, and say “reductio ad absurdum”. I realize the same thing, and say “yes, of course” (and if I don’t act on that, it’s because I have other urges and commitments beyond doing what I think is best, not because I don’t think doing so could be the best thing from an impartial point of view).
My best guess would be that most ethicists don’t understand the arguments surrounding my views better than I do, but that they know a lot more than I do about views that are based on assumptions I disagree with or am unconfident about (and about the specific non-abstract issues they work on). But I’m not 100% sure about this, and it would be interesting to test.
In the short story Three Worlds Collide, one of the species the space travelers meet evolved to see the eating of children as a terminal value. This doesn’t seem necessarily implausible to me (after all, evolution doesn’t pass the ethical intuitions it gives us through an ethics review board). I can absolutely imagine alien ethicists viewing hedonistic utilitarianism as a reductio ad absurdum because it doesn’t allow for the eating of conscious children.
While we have turned out much better than the hypothetical baby-eating aliens, I don’t think it’s a ridiculous example to bring up. I once talked on Facebook with a person taking a PhD in ethics who disagreed that we should care about the suffering of wild animals (my impression was that I was backing him into a corner where he would have to either change previously stated positions or admit that he didn’t fully believe in logic, but at some point he stopped responding). And you’ll find ethicists who see the punishment of wrongdoers as a terminal value (I obviously see punishment as an instrumental value).
A reasonable question to ask of me would be: if you think people’s ethical intuitions are unreliable, isn’t that also true of your own?
Well, that’s the thing. The views that I’m confident in are the ones that aren’t based on core ethical intuitions (although they overlap with my ethical intuitions), but can be deduced from things that aren’t ethical intuitions, as well as principles such as logical consistency and impartiality (I know I’m being unspecific here, and can expand on this if anyone wants me to). I could have reasoned my way to these views even if I were a complete psychopath. And the views I’m most confident in are the ones that don’t even rely on my beliefs about what I want for myself (that is, I’m much more sure that the conscious experience I would have if tortured is inherently bad than I am about e.g. whether it inherently matters that my beliefs about reality correspond with reality). My impression is that this commitment to being sceptical of ethical intuitions in this way isn’t shared by all (or even the majority?) of ethicists.
Anyway, I think it would be unwise of me to go on much longer, since this is a comment and not something that will be read by a lot of people, but I felt an urge to give at least some account of why I think the way I do. To summarize: I’m not so sure that the average ethicist understands the relevant arguments better than the EAs who have reflected the most on this, and I would be very unsurprised if the opposite were the case. And I think ethicists holding opinions other than ‘narrow consequentialism’ is more about them having a commitment to other ethical intuitions, and lacking some of the commitment to “impartiality” that I suspect narrow consequentialists often have, than about them having arguments that narrow consequentialist EAs haven’t considered or don’t understand. But I’m really not sure about this; if people think I’m wrong, I’m interested in hearing about it, and looking more into this is definitely on my to-do list.
It would be interesting if comprehensive studies were done, or tools were made, to identify what causes differences of opinion, to what degree philosophers belonging to one branch of ethical theory are logically consistent, to what degree they understand the arguments of other branches, and so on. Debates about these kinds of things can often be frustrating and inefficient, so I hope that we will be able to make progress in the future.
Thanks for that.

My basic worries are:
-Academics must gain something from spending ages thinking about and studying ethics, be it a better understanding of the arguments, knowledge of more arguments, or something else. I think this puts them in a better position than others, and should make others tentative in saying that they’re wrong.
-Your explanation for disagreeing with certain academics is that they have different starting intuitions. But does this account for the fact that academics can revise or abandon intuitions because of broader considerations? And even if you’re right, why do you think your intuitions are more reliable than theirs?
The views that I’m confident in are the ones that aren’t based on core ethical intuitions (although they overlap with my ethical intuitions), but can be deduced from things that aren’t ethical intuitions, as well as principles such as logical consistency and impartiality… I can expand on this if anyone wants me to
Academics must gain something from spending ages thinking about and studying ethics, be it a better understanding of the arguments, knowledge of more arguments, or something else. I think this puts them in a better position than others, and should make others tentative in saying that they’re wrong.
Btw, I agree with this in the sense that I’d rather have a random ethicist make decisions about an ethical question than a random person.
I’d definitely be interested to hear more :)
Great! I’m writing a text about this, and I’ll add a comment with a reference to it when the first draft is finished :)
Your explanation for disagreeing with certain academics is that they have different starting intuitions. But does this account for the fact that academics can revise or abandon intuitions because of broader considerations? And even if you’re right, why do you think your intuitions are more reliable than theirs?
A reasonable question, and I’ll try to give a better account of my reasons for this in my next comment, since the text may help give a picture of where I’m coming from. I will say in my defence, though, that I do have at least some epistemic modesty with regard to this, although not as much as you would probably consider reasonable. While what I think of as probably the best outcomes from an “objective” perspective corresponds to some sort of hedonistic utilitarianism, I do not, and do not intend to ever, work towards outcomes that don’t also take other ethical concerns into account, and I hope to achieve a future that is very good from the perspective of many ethical viewpoints (rights of persons, fairness, etc.), partly because of epistemic modesty.
Likewise :)