I’m not sure, but it seemed to me that this was the view you were defending in your original comment. Based on this comment, I take it that this is not, in fact, your view. Could you clarify which premise you reject, 1) or 2)?
Hmm… 1) When an individual’s life is evaluated as good or bad there may be an ultimate reason that is invoked to explain it, but I would not say that an ultimate reason has an intrinsic value: it is just valued as more fundamental than others, in the current thinking scheme of the evaluating entity. 2) Do we have an overriding moral reason to alleviate suffering? In certain circumstances, yes, like if there is an eternal hell we ought to end it if we can. But in general, no, I don’t think morality is paramount: it surely counts but many other things also count, more or less depending on the circumstances. I personally am concerned with the alleviation of suffering because it is a branch of activity that fits with my profile as a worker. But if I suggest that effective altruists should prioritize the alleviation of suffering, it is because that’s ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not.
I suspect there may be too much inferential distance between your perspective on normative theory and my own for me to explain my view on this clearly, but I will try. To start, I find it very difficult to understand why someone would endorse doing something merely because it is “effective” without regard for what it is effective at. The most effective way of going about committing arson may be with gasoline, but surely we would not therefore recommend using gasoline to commit arson. Arson is not something we want people to be effective at! I think that if effective altruism is to make any sense, it must presuppose that its aims are worth pursuing.
Similarly, I disagree with your contention that morality isn’t, as you put it, paramount. I do not think that morality exists in a special normative domain, isolated far away from concerns of prudence or instrumental reason. I think moral principles follow directly from the principle of instrumental reason, and there is no metaphysical distinction between moral reasons and other practical reasons. They are all just considerations that bear on our choices. Accordingly, the only sensible understanding of what it means to say that something is morally best is: “It is what one ought to do” (I am skeptical of the idea of supererogation). It is a practical contradiction to say, “X is what I ought to do, but I will not do it,” in the same way that it is a theoretical contradiction to say, “It is not raining, but I believe it’s raining.” Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best). To me, that sounds like saying, “EA should do X regardless of whether or not EA should do X.”
Regarding the idea of intrinsic value, I think what Fin, Michael et al. meant by “X has intrinsic value” is “X is valuable for its own sake, not for the sake of any further end or moral good.” This is the conventional understanding of what “intrinsic value” means in academic philosophy. Under this definition, if there is an ultimate reason that in fact explains why an individual’s life is Good or Bad, then that reason must, by virtue of the logical properties of the concepts in that sentence, have grounding in some kind of intrinsic value. But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad. In this case, however, I do not think it is possible to justify why we could ever have an overriding moral reason to do anything, including to eliminate an eternal Hell, as we could not justify why that Hell was Bad for those individuals who were stuck inside.
If you wanted to justify why that Hell was bad for those stuck inside, and you were committed to the notion that the structure of value must be determined by the subjective, evaluative judgments of people (or animals, etc.), you would wind up—deliberately or not—endorsing a “desire-based theory of wellbeing,” like one of those described in this forum post. However, as a note of caution, in order to believe that the structure of value is determined entirely by people’s subjective, evaluative judgments, probably as expressed through their preferences (on some understanding of what a preference is), you would have to consider those judgments to be ultimately without justification. Either I prefer X to Y because X is relevantly better than Y, or I prefer X to Y without justification, and there are no absolute, universal facts about what one should prefer. I think there are facts about what one should prefer and so steer clear of such theories.
Inferential distance makes discussion hard indeed. Let’s try to go first to this focal point: which ultimate goal is best for effective altruists. The answer cannot be found by reasoning alone; it requires a collective decision based on shared values. Some prefer the goal of having a framework for thinking and acting effectively in altruistic endeavors. You and I would not be satisfied with that, because altruism has no clear content: your altruistic endeavor may go against mine (examples may be provided on demand). Some, then, realizing the necessity of defining altruism, appeal to well-being as what must be promoted to benefit others (with everyone’s well-being counting equally, says MacAskill). That is pretty good, except that what constitutes well-being remains in question, as the opening post here states. Your conception of well-being may go against mine (examples may be provided on demand). Some, at last, realizing the necessity of a more precise goal, though of course not a perfect one, suggest that prioritizing the alleviation of suffering is the effective altruistic endeavor par excellence.
I am arguing against well-being and for suffering-alleviation as ultimately the best goal for effective altruists.
Now, one of my arguments is simply that suffering-alleviation is better than well-being because “that’s ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not,” and, as I wrote in a previous comment, “As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don’t want than on what we want, and the latter can be much more easily subordinated to the former than the contrary.” You object that the end goal is more important than such considerations. You are right, of course, but you seem to have overlooked that, in order to specify that I was speaking only about effectiveness, I added “this being the case whether that purpose is morally good or not.”
The same applies, I think, when you say “Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best).”
Our views differ on whether morality is paramount. If you are an ethicist by profession, or a highly virtuous person, I understand that morality may be paramount for you, and I think such persons are often useful. But personally, notwithstanding practical contradiction, I might go against any ought for various reasons, for instance simply because I do not believe that my moral judgment is always right. More importantly, I think that science, and especially the science of suffering, is not subordinated to ethics or any other sphere of human activity, except in very exceptional circumstances.
Finally, perhaps we might ease our discussion by clarifying the following. You wrote, “But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad.” Well, I believe, as you put it, that “the structure of value must be determined by the subjective, evaluative judgments of people.” I cannot see why you add that I “would have to consider those judgments to be ultimately without justification.” Is it because you consider that a subjective judgment is not a fact? Do you really think that “there are no absolute, universal facts about what one should prefer” if there are only subjective preferences? If I actually prefer yellow to red, is it not an absolute, universal fact about what I should prefer? Collectively it is more complicated: we are not alone, and no two preferences are ever exactly the same. So it seems that we can come to collective preferential judgments that are ultimately justified, but not absolute and not based on universal facts.
Excellent response! I’ll think about it and come back to let you know my thoughts, if you will.
Thank you — please do!
Thanks for your interaction.