Quick thoughts. The goal of effective altruism ought to be based on something more precise than the good of others defined as “well-being” because nothing is intrinsically or non-instrumentally good for a sentient entity when qualia depend on each other for having any value/meaning. As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don’t want than on what we want, and the latter can be much more easily subordinated to the former than the contrary.
I’m not quite sure I understand what you mean. My experiences have no value unless there is another experiencer in the world? If I’m the last person on Earth and I stub my toe, I think that’s bad because it is bad for me, that is, it reduces my well-being.
Also, given your concerns, you’ll need to define suffering in a way that is distinct from well-being. If I think suffering is just negative well-being (aka ‘ill-being’), then your concerns about well-being apply to suffering too.
Also also, if suffering isn’t intrinsically bad, in what sense is it bad?
Finally, I note that all of these concerns are about the value of well-being in a moral theory, which is a distinct question from what this post tackles, which is just what the theories of well-being are. One could (implausibly) say well-being had no moral value (which is, I suppose, almost what impersonal views of value do say...).
Thanks, Michael, for your reaction. Clearly, “qualia depend on each other for having any value/meaning” is too compressed a sentence to be readily understood. I meant that if consciousness or sentience is made up of qualia, i.e. meaningful and (dis)valuable elementary contents of experience, then each of those qualia has no value/meaning except insofar as it relates to other qualia: nothing is (dis)valuable by itself; qualia depend on each other… In other words, a “quale” has conscious value or meaning only when it is within a psychoneural circuit that necessarily involves several qualia, as illustrated in the next paragraph.
Thus, suffering is not intrinsically bad. Badness here may have two distinct senses: unpleasant in an affective sense, or wrong in a moral sense. Both senses depend on concomitant qualia other than the suffering itself to take on their value and meaning. For instance, stubbing your toe might not be unpleasant if you are rushing to save your baby from the flames, whilst it may be quite unpleasant if you are going to bed to sleep… Or a very unpleasant occurrence of suffering, like being whipped, might be morally right if you feel that it is deserved and formative, whilst it may be utterly wrong in other circumstances...
Sorry, I really don’t follow your point in the first para.
One thing to say is that experiences of suffering are pro tanto bad (bad ‘as far as it goes’). So stubbing your toe is bad, but it may be accompanied by another sensation such that overall you feel good. The toe stubbing is still pro tanto bad.
Anyway, like I said, none of this is directly relevant to the post itself!
Okay, I realize that the relevance of neuroscience to the philosophy of well-being can hardly be made explicit in sufficient detail at the level of an introduction. That is unfortunate, if only for our mutual understanding, because with enough attention to detail the toe-stubbing example I used would not be read as you read it: if it is not unpleasant to stub your toe, how can it be bad, pro tanto or otherwise?
I think we may well be speaking past each other somewhat. In my example, I took it that the toe stubbing was unpleasant, and I don’t see any problem in saying the toe stubbing is unpleasant while I am simultaneously experiencing other things such that I feel pleasure overall.
The usual case people discuss here is “how can BDSM be pleasant if it involves pain?”, and the answer is to distinguish between bodily pain in certain areas and a cognitive feeling of overall pleasure that results from feeling that bodily pain.
We may sympathize in the face of such difficulties. Terminology is a big problem when speaking about suffering in the absence of a systematic discipline dealing with suffering itself. That’s another reason why the philosophy of well-being is fraught with traps and why I suggest the alleviation of suffering as the most effective first goal.
It’s not clear to me how one can believe 1) that there is nothing that ultimately explains what makes a person’s life go well for them, and 2) that we have an overriding moral reason to alleviate suffering. It would seem dangerously close to believing that we have an overriding moral reason to alleviate suffering in spite of the fact that it is not Bad for those who experience it. You might claim that suffering is instrumentally bad, that it makes it harder to achieve… whatever one wants to achieve, but presumably, if achieving whatever one wants to achieve is valuable, it is valuable because of the way in which it leads one’s life to “go well.” If that is the case, then you have a theory of well-being. If, on the other hand, achieving whatever one wants to achieve is not valuable in any absolute sense, then it is hard to say why it would be valuable at all, and you, again, would struggle to justify why suffering is a bad.
Is there anyone who believes 1) and 2)?
I’m not sure, but it seemed to me that this was the view that you were defending in your original comment. Based on this comment, I take it that this is not, in fact, your view. Could you clarify which premise you reject, 1) or 2)?
Hmm… 1) When an individual’s life is evaluated as good or bad, there may be an ultimate reason invoked to explain the evaluation, but I would not say that an ultimate reason has intrinsic value: it is just valued as more fundamental than other reasons within the current thinking scheme of the evaluating entity. 2) Do we have an overriding moral reason to alleviate suffering? In certain circumstances, yes: if there is an eternal hell, we ought to end it if we can. But in general, no, I don’t think morality is paramount: it surely counts, but many other things also count, more or less depending on the circumstances. I personally am concerned with the alleviation of suffering because it is a field of activity that fits my professional profile. But if I suggest that effective altruists should prioritize the alleviation of suffering, it is because that’s ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not.
I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.
Similarly, I disagree with your contention that morality isn’t, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.” Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”
Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further end or moral good.” This is the conventional understanding of what “intrinsic value” means in academic philosophy. Under this definition, if there is an ultimate reason that in fact explains why an individual’s life is Good or Bad, then that reason must, by virtue of the logical properties of the concepts in that sentence, have grounding in some kind of intrinsic value. But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad. In this case, however, I do not think it is possible to justify why we could ever have an overriding moral reason to do anything, including to eliminate an eternal Hell, as we could not justify why that Hell was Bad for those individuals who were stuck inside.
If you wanted to justify why that Hell was bad for those stuck inside, and you were committed to the notion that the structure of value must be determined by the subjective, evaluative judgments of people (or animals, etc.), you would wind up—deliberately or not—endorsing a “desire-based theory of wellbeing,” like one of those described in this forum post. However, as a note of caution, in order to believe that the structure of value is determined entirely by people’s subjective, evaluative judgments, probably as expressed through their preferences (on some understanding of what a preference is), you would have to consider those judgments to be ultimately without justification. Either I prefer X to Y because X is relevantly better than Y, or I prefer X to Y without justification, and there are no absolute, universal facts about what one should prefer. I think there are facts about what one should prefer and so steer clear of such theories.
Excellent response! I’ll think about it and come back to let you know my thoughts, if you will.
Thank you — please do!
Inferential distance does indeed make discussion hard. Let us first go to this focal point: which ultimate goal is best for effective altruists? The answer cannot be found by reasoning alone; it requires a collective decision based on shared values. Some prefer the goal of having a framework for thinking and acting effectively in altruistic endeavors. You and I would not be satisfied with that, because altruism has no clear content: your altruistic endeavor may go against mine (examples can be provided on demand). Some, then, realizing the necessity of defining altruism, appeal to well-being as what must be promoted to benefit others (with everyone’s well-being counting equally, says MacAskill). That is pretty good, except that what constitutes well-being remains in question, as the opening post here states. Your conception of well-being may go against mine (examples can be provided on demand). Finally, some, realizing the necessity of a more precise goal, though of course not a perfect one, suggest that prioritizing the alleviation of suffering is the effective altruistic endeavor par excellence.
I am arguing against well-being and for suffering-alleviation as ultimately the best goal for effective altruists.
Now, one of my arguments is simply that suffering-alleviation is better than well-being because “that’s ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not,” and, as I wrote in a previous comment, “As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don’t want than on what we want, and the latter can be much more easily subordinated to the former than the contrary.” You argue that the end goal is more important than such considerations. You are right, of course, but you seem to have overlooked that, in order to specify that I was speaking only about effectiveness, I added “this being the case whether that purpose is morally good or not.”
The same applies, I think, when you say “Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best).”
Our views differ on whether morality is paramount. If you are an ethicist by profession, or a highly virtuous person, I understand that morality may be paramount for you, and I think such persons are often useful. But personally, notwithstanding any practical contradiction, I might go against any ought for various reasons, for instance simply because I do not believe that my moral judgment is always right. More importantly, I think that science, and especially the science of suffering, is not subordinated to ethics or to any other sphere of human activity, except in very exceptional circumstances.
Finally, perhaps we might ease our discussion by clarifying the following. You wrote, “But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad.” Well, I believe, as you put it, that “the structure of value must be determined by the subjective, evaluative judgments of people.” I cannot see why you add that “you would have to consider those judgments to be ultimately without justification.” Is it because you consider that a subjective judgment is not a fact? Do you really think that “there are no absolute, universal facts about what one should prefer” if there are only subjective preferences? If I actually prefer yellow to red, is that not an absolute, universal fact about what I should prefer? Collectively it is more complicated… we are not alone, and no two preferences are ever exactly the same… so it seems that we can come to collective preferential judgments that are ultimately justified, but not absolute and not based on universal facts.
Thanks for your interaction.