RobertDaoust
Hi Derek, just in case something in there might be useful to you: https://docs.google.com/document/d/1OTCQlWE-GkY_V4V-OfJAr7Q-vJyIR8ZATpeMrLkmlAo/edit
Survival becomes a very high-impact consideration when it is deemed indispensable for keeping control over nature, so as to prevent negative values from coming back after extinction.
You have this reference: https://ieeexplore.ieee.org/document/9001063/authors#authors whose first paragraph reads:
“In the last year, the Association for Computing Machinery (ACM) released new ethical standards for professional conduct [1] and the IEEE released guidelines for the ethical design of autonomous and intelligent systems [2] demonstrating a shift among professional technology organizations toward prioritizing ethical impact. In parallel, thousands of technology professionals and social scientists have formed multidisciplinary committees to devise ethical principles for the design, development, and use of artificial intelligence (AI) technologies [3]. Moreover, many governments and international organizations have released sets of ethical principles, including the OECD Principles in 2019 [4], the Montreal Declaration in 2017 [5], the U.K. House of Lords report “AI in the U.K.: ready willing and able?” in 2018 [6], the European Commission High-Level Expert Group (HLEG) on AI in 2018 [7], and the Beijing AI Principles in 2019 [8]. Indeed, recent reports indicate that there are currently more than 70 publicly available sets of ethical principles or frameworks for AI, most of which have been released within the last five years [3], [9], [10].”
I am just wondering whether your review would be more complete if it mentioned that kind of work. The IEEE, for instance, has this page: https://ethicsinaction.ieee.org/
We may well sympathize with each other in the face of such difficulties. Terminology is a big problem when speaking about suffering in the absence of a systematic discipline dealing with suffering itself. That is another reason why the philosophy of well-being is fraught with traps, and why I suggest the alleviation of suffering as the most effective first goal.
Okay, I realize that the relevance of neuroscience to the philosophy of well-being can hardly be made explicit in sufficient detail at the level of an introduction. That is unfortunate, if only for our mutual understanding, because with enough attention to detail the toe-stubbing example I used would not be read as you read it: if stubbing your toe is not unpleasant, how can it be bad, pro tanto or otherwise?
Inferential distance does indeed make discussion hard. Let us first try to reach this focal point: which ultimate goal is best for effective altruists? The answer cannot be found by reasoning alone; it requires a collective decision based on shared values. Some prefer the goal of having a framework for thinking and acting effectively in altruistic endeavors. You and I would not be satisfied with that, because altruism has no clear content: your altruistic endeavor may go against mine (examples can be provided on demand). Some, then, realizing the necessity of defining altruism, appeal to well-being as what must be promoted to benefit others (with everyone’s well-being counting equally, says MacAskill). That is pretty good, except that what constitutes well-being remains in question, as the opening post here states. Your conception of well-being may go against mine (examples can be provided on demand). Some, at last, realizing the necessity of a more precise goal, even if not a perfect one, suggest that prioritizing the alleviation of suffering is the effective altruistic endeavor par excellence.
I am arguing against well-being and for suffering-alleviation as ultimately the best goal for effective altruists.
Now, one of my arguments is simply that suffering-alleviation is better than well-being because “that’s ultimately the most effective thing that we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not,” and, as I wrote in a previous comment, “As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don’t want than on what we want, and the latter can be much more easily subordinated to the former than the contrary.” You reply that the end goal is more important than such considerations. You are right, of course, but you seem to have overlooked that, in order to specify that I was speaking only about effectiveness, I added “this being the case whether that purpose is morally good or not.”
The same applies, I think, when you say “Hopefully, this clarifies how confounding I find the perspective that EA should prioritize alleviating suffering regardless of whether or not doing so is morally good, as you put it (which is surely a lower bar than morally best).”
Our views differ on whether morality is paramount. If your profession is ethicist, or if you are a highly virtuous person, I understand that morality may be paramount for you, and I think such persons are often useful. But personally, notwithstanding the practical contradiction, I might go against any ought for various reasons, for instance simply because I do not believe that my moral judgment is always right. More importantly, I think that science, and especially the science of suffering, is not subordinate to ethics or to any other sphere of human activity, except in exceptional circumstances.
Finally, perhaps we might ease our discussion by clarifying the following. You wrote, “But I think your argument is actually that there isn’t anything that in fact explains why an individual’s life is Good or Bad.” Well, I believe, as you put it, that “the structure of value must be determined by the subjective, evaluative judgments of people.” I cannot see why you add, “you would have to consider those judgments to be ultimately without justification.” Is it because you consider that a subjective judgment is not a fact? Do you really think that “there are no absolute, universal facts about what one should prefer” if there are only subjective preferences? If I actually prefer yellow to red, is that not an absolute, universal fact about what I should prefer? Collectively it is more complicated: we are not alone, and no two preferences are ever exactly the same, so it seems that we can come to collective preferential judgments that are ultimately justified, though not absolute and not based on universal facts.
Thanks for the interaction.
Excellent response! I’ll think about it and come back to let you know my thoughts, if you don’t mind.
Hmm… 1) When an individual’s life is evaluated as good or bad, there may be an ultimate reason invoked to explain the evaluation, but I would not say that an ultimate reason has intrinsic value: it is just valued as more fundamental than other reasons within the current thinking scheme of the evaluating entity. 2) Do we have an overriding moral reason to alleviate suffering? In certain circumstances, yes: if there were an eternal hell, we ought to end it if we could. But in general, no, I do not think morality is paramount: it surely counts, but many other things also count, more or less depending on the circumstances. Personally, I am concerned with the alleviation of suffering because it is a branch of activity that fits my profile as a worker. But if I suggest that effective altruists should prioritize the alleviation of suffering, it is because that is ultimately the most effective thing we can do in terms of our current capacity to act together for a common purpose, this being the case whether that purpose is morally good or not.
Is there anyone who believes 1) and 2)?
Thanks, Michael, for your reaction. Clearly, “qualia depend on each other for having any value/meaning” is too short a sentence to be readily understood. I meant that if consciousness or sentience is made up of qualia, i.e., meaningful and (dis)valuable elementary contents of experience, then each of those qualia has no value/meaning except inasmuch as it relates to other qualia: nothing is (dis)valuable by itself; qualia depend on each other. In other words, a “quale” has conscious value or meaning only when it is within a psychoneural circuit that necessarily involves several qualia, as illustrated in the next paragraph.
Thus, suffering is not intrinsically bad. Badness here may have two distinct senses: unpleasant in an affective sense, or wrong in a moral sense. Both senses depend on concomitant qualia other than suffering itself to take on their value and meaning. For instance, stubbing your toe might not be unpleasant if you are rushing to save your baby from the flames, whilst it may be quite unpleasant if you are simply going to bed to sleep… Or a very unpleasant occurrence of suffering, like being whipped, might be morally right if you feel that it is deserved and formative, whilst it may be utterly wrong in other circumstances…
Quick thoughts. The goal of effective altruism ought to be based on something more precise than the good of others defined as “well-being,” because nothing is intrinsically or non-instrumentally good for a sentient entity when qualia depend on each other for having any value/meaning. As to prioritization, the largest common goal ought to be the alleviation of suffering, not because suffering is bad in itself but because we agree much more on what we don’t want than on what we want, and the latter can be much more easily subordinated to the former than the other way around.
Thanks, Michael, that deserves an entry in https://docs.google.com/document/d/1OTCQlWE-GkY_V4V-OfJAr7Q-vJyIR8ZATpeMrLkmlAo/edit#
Suffering-Focused Ethics: Defense and Implications, by Magnus Vinding.
I like your thesis, Pedro, because when I look at its chapters on “Why the Future Matters” and “Optimal Control Theory”, I think useful links could be established between it and the Algosphere Alliance’s long-term project of organizing the alleviation of suffering in the world. As I wrote recently: in my view, the causes of severe suffering are so many and so diverse that the current pandemic is still only a small part of the issue when it comes to organizing global efforts to alleviate suffering. It is necessary to deal with small parts, but doing so is very ineffective. Working on root causes may seem remote, abstract, and idealistic, but isn’t it, in fact, the opposite? Every specialized effort is a step in the right direction. The most fundamental cause, however, and the most effectively altruistic, is to care about suffering itself and its universal alleviation. The Algosphere Alliance invites everyone to do just that.
Good work! I am including a link to it in my Preparatory Notes for the Measurement of Suffering, where perhaps you will find other useful measurement methods.
Thank you for this work, Marius, it fits well into a systematic approach that should be developed, as suggested in Preparatory Notes for the Measurement of Suffering.