I enjoyed reading this post! I like Wittgensteinian arguments, and applying them to ethics, so hurrah for this. There was also some lively discussion of it on the EA corner chat.
Another possibly misleading motivation for irreducible normativity may be linguistic. It seems to me plausible that anyone who uses the word agony in the standard sense is committing her/himself to agony being undesirable. This is not an argument for irreducible normativity, but it may give you a feeling that there is some intrinsic connection underlying the set of self-evident cases.
From an EA perspective, I thought it could be useful to get a sense of the effectiveness of this post (series). You could, for instance, identify a few philosophy graduate students who hold the position you’re arguing against and compare their credence in the relevant position before and after reading. In my experience, people’s cruxes for disagreement in ethics are all over the place, and you run the risk of missing the arguments that compel those who believe in e.g. irreducible normativity. I very much like Wittgensteinian arguments against motivations and coherence, but I’m not sure those who subscribe to irreducible normativity will find them compelling. If this concern materializes, you might find it useful to first poll people who disagree with you about the position of interest, and then write a post addressing the cruxes you have identified.
Edit: At the moment the EA Forum spam filter is, for some reason, preventing me from replying to @antimonyanthony, so I will reply by edit instead: I think this is quite a subtle point, and as I understand it, there is some ongoing disagreement among philosophers about these issues. Let’s make things clearer by replacing ‘agony’ with ‘bad experience’. A bad experience for a paperclip maximizer is likely to involve difficulty producing paperclips. More generally, which experiences count as bad is determined by the agent’s nature. However, for humans there’s sufficient overlap in our neural nature for there to be self-evident cases of badness, e.g. extreme pain. If someone does not call these self-evident cases bad, then she/he is not using the word bad in its standard sense. There are a lot of complications to this argument (cf. Kripke on C-fibers), but I believe the general argument I sketched holds.
It seems to me plausible that anyone who uses the word agony in the standard sense is committing her/himself to agony being undesirable. This is not an argument for irreducible normativity, but it may give you a feeling that there is some intrinsic connection underlying the set of self-evident cases.
Could you please clarify this? As someone who is mainly convinced of irreducible normativity by the self-evident badness of agony—in particular, considering the intuition that someone in agony has reason to end it even if they don’t consciously “desire” that end—I don’t think this can be dissolved as a linguistic confusion.
It’s true that for all practical purposes humans seem not to desire their own pain/suffering. But in my discussions with some antirealists, they have argued that if a paperclip maximizer, for example, has no desire to avoid suffering (by hypothesis, all it wants is to maximize paperclips), then such a being has no reason to avoid suffering. That to me seems patently unbelievable. Apologies if I’ve misunderstood your point!