Inferential distance makes discussion hard indeed. Let’s start with this focal point: what ultimate goal is best for effective altruists? The answer cannot be found by reasoning alone; it requires a collective decision based on shared values. Some are content with the goal of having a framework for thinking and acting effectively in altruistic endeavors. You and I would not be satisfied with that, because altruism has no clear content: your altruistic endeavor may go against mine (examples can be provided on demand). Some, then, realizing the necessity of defining altruism, appeal to well-being as what must be promoted to benefit others (with everyone’s well-being counting equally, says MacAskill). That is pretty good, except that what constitutes well-being remains in question, as the opening post here states: your conception of well-being may go against mine (examples can be provided on demand). Some, at last, realizing the necessity of a more precise goal (though of course not a perfect one), suggest that prioritizing the alleviation of suffering is the effective altruistic endeavor par excellence.
I am arguing against well-being and for suffering-alleviation as ultimately the best goal for effective altruists.
Now, one of my arguments is simply that suffering-alleviation is a better goal than well-being because, as I wrote earlier, “that’s ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not,” and, in a previous comment, “As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don’t want than on what we want, and the latter can be much more easily subordinated to the former than the contrary.” You object that the end goal is more important than such considerations. You are right, of course, but you seem to have overlooked that, precisely in order to make clear that I was speaking only about effectiveness, I added “this being the case whether that purpose is morally good or not.”
The same applies, I think, when you say: “Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best).”
Our views differ on whether morality is paramount. If you are an ethicist by profession, or a highly virtuous person, I understand that morality may be paramount for you, and I think such persons are often useful. But personally, notwithstanding the practical contradiction involved, I might go against any “ought” for various reasons, for instance simply because I do not believe that my moral judgment is always right. More importantly, I think that science, and especially the science of suffering, is not subordinated to ethics or to any other sphere of human activity, except in very exceptional circumstances.
Finally, perhaps we might ease our discussion by clarifying the following. You wrote: “But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad.” Well, I believe, as you put it, that “the structure of value must be determined by the subjective, evaluative judgments of people.” I cannot see why you add that “you would have to consider those judgments to be ultimately without justification.” Is it because you consider that a subjective judgment is not a fact? Do you really think that “there are no absolute, universal facts about what one should prefer” if there are only subjective preferences? If I actually prefer yellow to red, is that not an absolute, universal fact about what I should prefer? Collectively, it is more complicated: we are not alone, and no two preferences are ever exactly the same. So it seems that we can come to collective preferential judgments that are ultimately justified, yet neither absolute nor based on universal facts.
Thanks for the exchange.