All opinions are my own unless otherwise stated. Geophysics and math graduate with some web media and IT skills.
Noah Scales
Thank you, Karthik
I don’t have much time, and I don’t expect much attention regardless of how much time I put into writing about this topic. It is boring, frankly, and I am a boring writer. The best I can do is keep it short.
Altruistic value is not objectively measurable. If a creature like God existed, then she could judge the altruistic value of actions in terms of their consequences. Everyone else makes do with unreliable mental models that are bound by uncertain future circumstances.
As a brief thought experiment, if you have a sense that an action (for example, a large donation to a reliable effective charity) is altruistic, then you have made a judgement of the altruistic value of that donation. Other actions, in fact, all actions, are vulnerable to the same thought experiment. The only result is to make explicit what you already think.
I could offer my sense of the true failings of the EA community in making better judgements among the specific options of behavior available in certain situations, but those would be context-bound and controversial, with results that I don’t think would be worth my time. Besides, I don’t care, per se, whether the EA community continues to have blind spots about certain common evil actions and continues to perform them. It’s a big world.
I just heard about this contest and thought, hmmm, how to summarize a helpful suggestion for improvement to EA, a little thought experiment of my own.
Sorry I could not put in the effort that I see others do here, but I promise you that my efforts are well-intended and sincere.
I bought a GPU some years ago. My belief is that its consequences were negligible or a small evil, so mildly anti-altruistic.
* It was built by exploited labor. It is simply self-serving to suggest that exploiting labor is a necessary step in modernizing another country; the definition of exploitation tells me that the labor is being treated harmfully. However, the actual suffering that the purchase itself caused is negligible in terms of encouraging additional exploitation, because of how big the market for GPUs is. Notice that I am looking at the future from the point at which I buy the GPU, not at the human suffering that already went into it. I consider that suffering only when I decide whether my purchase encourages its continuation.
* It served no altruistic purpose, it turned out. I might as well not have bought the thing, for all the need I had for the GPU RAM. The CPU’s ability to render graphics was more than sufficient for my needs; I don’t game much. Furthermore, nothing I did on the computer particularly benefited others. It benefited me, mildly, but not others.
* It was, or soon will be, e-waste, and a large chunk of e-waste. E-waste harms the environment and poisons people because of how it is handled on disposal. I knew that in advance. However, this was only a mild evil, because one GPU is not that polluting or poisonous on its own.
If I were a gamer, my gaming would not contribute to the welfare of others. Again, gaming would be a selfish act or a small evil.
To establish the altruism of the consequences of the GPU purchase (and use), I score its consequences as I see them. I’m not that sophisticated, so I rely on a two-axis analysis. The positive x-axis measures positive altruism; the negative y-axis measures negative altruism, or anti-altruism. So x is how good, and −y is how evil. X runs to 100, y to −100. Off the top of my head, I’m going with (0,-2) for the GPU purchase: there were no altruistic consequences, but there were a few mildly evil ones.
To compare the altruistic value of the consequences of the GPU purchase with the value of the consequences if I had not purchased it, I calculate a distance between the two scores. I need some sense of what I would have done without the GPU; I assume I would simply have gone on with my lifestyle. The desktop computer that I purchased anyway becomes e-waste, has a similar origin (and so contributes to similar exploitation), and, it turns out, my use of it doesn’t really benefit anyone else. So: (0,-6), because the computer is 3x the e-waste of the GPU, and again my purchase encouraged electronics manufacturing only negligibly; I bought everything new, but so did millions of others.
To compare apples to apples, I need to compare the altruistic value of the computer purchase with the GPU to that of the computer purchase without the GPU. Relying on simple addition of component values, (0,-8) is the score of the computer purchase with the GPU, versus (0,-6) without it. I can calculate a Euclidean distance between them; it’s just 2. The GPU purchase alone didn’t change the consequences much between the two actions.
I can also compare the two options in terms of scale: (0,-8) and (0,-6). Here I feel my math suffers for lack of options, so for now I’m going with a comparison of the distances of the two points from the origin (0,0) to decide the scale of each action I want to compare. The two actions are: computer purchase with GPU, and computer purchase without GPU. I can say that the purchase of a computer with a GPU is 33% (8/6) more evil than the purchase of a computer with just the mobo, CPU, power supply, keyboard, and mouse.
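The distance and scale comparisons can be checked with a few lines of Python. This is a minimal sketch; the function names and the (good, evil) tuple convention are mine, not anything standard:

```python
import math

def distance(a, b):
    """Euclidean distance between two (good, evil) altruism scores."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def scale(p):
    """Magnitude of a score: its distance from the neutral origin (0, 0)."""
    return math.hypot(p[0], p[1])

with_gpu = (0, -8)     # computer purchase including the GPU
without_gpu = (0, -6)  # computer purchase without the GPU

print(distance(with_gpu, without_gpu))       # 2.0
print(scale(with_gpu) / scale(without_gpu))  # ~1.333, i.e. 33% "more evil"
```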
Explaining this took a lot longer than writing down (0,-6), (0,-8), 2, and 33%. The numbers are relative, subjective, and controversial, and that’s why I suggested this analysis for the EA community: the numbers might have more value to collective decision-making as intersubjective values. Remember, this is on a scale of magnitudes from 0 to 100 on each axis. For example, on the EA Forum, someone might give me information that a chunk of e-waste independently raises the risk of cancer in 3 people to 1/12. Then I could factor that in: “Hmm, my new computer purchase with a GPU, at least 4 chunks of e-waste, causes cancer in some person later,” so now the scores are (0,-60) and (0,-45). It wouldn’t matter so much what specific numbers I chose, but more that the mathematics of my choice decide a very different altruistic value for GPU purchases (and electronics purchases in general) than before. Armed with my new information, I might decide to buy a used computer and start dividing the consequences of its eventual turn to e-waste with its previous owner.
Or if I decided that my computer use was altruistic (“Hmm, I did some research with it that saved some people from some unnecessary suffering in their lives”), then the scores might be (15,-8) and (15,-6), for example, with a distance of 2 between the points but a smaller scale difference of about 5% (17/16.16). Now the GPU purchase has less influence on the overall impact of my computer purchase because of how I used the computer. If the GPU purchase enabled some specific altruistic use of my computer, then that percentage difference in altruistic value scale (size) would start going up, and so would the distance in altruistic value between the two purchase options. Interestingly, if I knew that my purchasing a new computer effectively gave someone else cancer later, then my altruistic use of the computer is obviously inadequate to justify the purchase. Food for thought.
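The updated comparison can be verified with a short Python sketch (function names are mine; the scores are the subjective ones from the text):

```python
import math

def distance(a, b):
    """Euclidean distance between two (good, evil) altruism scores."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def scale(p):
    """Distance of a score from the neutral origin (0, 0)."""
    return math.hypot(p[0], p[1])

with_gpu = (15, -8)     # research use scored as +15 good
without_gpu = (15, -6)

print(distance(with_gpu, without_gpu))       # still 2.0
print(scale(with_gpu) / scale(without_gpu))  # 17/16.16, about a 5% difference
```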
Here are a few final thoughts:
* My model of the altruism of electronics hardware purchases considers only two evils (of which only the e-waste problem is impactful in a market of this scale) and one potential good (how I use the hardware).
* I had paid attention to the importance of the GPU purchase over time; that’s how I knew the GPU did not contribute to my computer use. Another non-gamer might not be able to judge, because they never gathered the information.
* I applied a hard test, “prove your use was altruistic,” and really couldn’t prove to myself that anything I did with my computer benefited anyone else. That the possibility exists doesn’t bother me, but I can’t include it in my calculations.
* I chose a two-dimensional representation for altruistic/anti-altruistic value because positive and negative consequences don’t cancel each other out.
* This was some quick and boring computation, but notice how things started to change once I “found out” about the cancer risk caused by the e-waste, or when I could identify altruistic consequences of my computer use in a two-dimensional space of altruistic value.
* A modified cosine distance would better represent the difference in closeness of altruistic value to good or evil. There are other possibilities too.
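For what a cosine-style measure might look like, here is a sketch of plain (unmodified) cosine distance over the same (good, evil) scores. The specific modification the text has in mind is not specified, so this is only the starting point, and it is undefined at the neutral score (0, 0):

```python
import math

def cosine_distance(a, b):
    """1 minus the cosine similarity of two (good, evil) scores.
    0 when the scores have the same mix of good and evil,
    approaching 2 when they point in opposite directions.
    Undefined (division by zero) at the neutral origin (0, 0)."""
    dot = a[0] * b[0] + a[1] * b[1]
    return 1 - dot / (math.hypot(*a) * math.hypot(*b))

# Direction matters, magnitude does not:
print(cosine_distance((10, -1), (20, -2)))  # ~0.0: same good/evil mix at different scales
print(cosine_distance((15, -8), (15, -6)))  # small: similar mix
print(cosine_distance((10, 0), (0, -10)))   # 1.0: purely good vs. purely evil
```

Unlike Euclidean distance, this measure ignores the size of the scores and compares only their direction, which is one way of capturing “closeness to good or evil.”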
So Jackson, thanks for your interest and comments. I analyzed a GPU purchase, mine. I hope you found it interesting.
You wrote,
“Analyzing the ethical impact of everyday decisions (like about where to live, how to commute, what to eat, who to vote for, etc) is essentially a pitch for “microprojects”, and would be more suited to a world where there were very many more people interested in EA but much less funding available.”
Hmm, yes. Pragmatically, I wouldn’t want to insult the ethics of wealthy charitable givers when their contributions can count for so much and they will earn their money however they do. I see that my suggestion is naive and possibly a poor fit to the EA community.
The initial sentences play on the word “authority.” Barracuda implies that “authority” names those whose resources are used in EA causes, that EA folks have resources, and that they prefer to keep their elevated authority while sharing only their wealth. Barracuda states that EA efforts are not intended to further causes associated with social justice or democracy, but only socioeconomic equality or health.
Basically, I take the criticism to be that EA depends on, or does not address or correct, political inequality.
Don’t consider the act of choosing to be an action that is subject to an altruistic value score calculation of its potential consequences. By potential consequence I mean a consequence that you believe in. For some, such a consequence would be all the actions that you did not take.
Keep in mind that altruistic value consequences are based on self-reports. Altruistic value calculations are what you do for yourself with yourself.
You wrote
“There is a substantial philosophical literature on such topics that I will not wade into, and I believe such non-value-based arguments can be mapped onto value-based arguments with minimal loss (e.g., not having a duty to make happy people can be mapped onto there being no value in making happy people).”
Duty to accomplish X implies much more than an assessment of the value of X. To lack the (moral, legal, or ethical) obligation to bring about a state of affairs does not imply a sense that the state of affairs has no value to you or others.
Bivalves are a big part of the US fishing industry. You can explore some of the risks to them by looking over the recent history of their cultivation in the US and globally.
Ocean acidification, waste water outlets, garbage dumping, and storm water runoff are threats to pop-up farms over the next few decades. After that, acidification combined with temperature and pollution could be too damaging, either to farming efforts or to the quality of the food.
Ocean currents near shore can produce lower pH in coastal waters, for example 7.65 as opposed to a global average of 8.1 (the consensus figure, which I quote without necessarily believing it), from the upwelling of colder, more acidic water on the west coast of the US. Bivalves are sensitive to increased acidity of ocean water: their fertilization rates decrease and their juvenile mortality increases. There might be an effect on their maturation size as well.
One estimate pins ocean average pH at 7.8 by 2094; it is currently 8.1 and dropping. That average allows wide variation in the availability of carbonate and calcium ions for shell formation in different waters across the globe. Heat maps show the largest declines in carbonate availability near the poles, with unequal distributions around the equator. Measurement data from 2006, I think, shows recent changes in carbonate chemistry occurring in the top 200 meters of the ocean, where marine ecosystems are most productive.
I believe that marine biologists would agree that the loss of shell-forming organisms in the ocean would create a ripple effect throughout the world’s oceans. The discussions I have reviewed so far suggest that sea butterflies, shell-forming marine animals that are food for larger fish we know, will die out under certain environmental stresses, emptying the ocean of their predators. That pathway to a die-off of marine life is identified repeatedly, maybe because it matters to the commercial fishing industry. Without sea butterflies, the major food source for fish that we like to eat will be gone. The question then is whether ocean chemistry will allow widespread bivalve production for a significant time period.
I can’t find consensus estimates of the timing of marine life die-offs triggered by the loss of sea butterflies. pH avg change models from NOAA suggest that pH reaches 7.8 before the end of the century. That is below the point where the shell dissolves off a sea butterfly (sea snail) body. Is that pH enough to kill all bivalves? I don’t know, but you could probably answer that question easily.
Interestingly, the only public claim I could find of the likely death of marine life as a whole within this century credits multiple simultaneous stresses on marine life, including a massive poisoning of plankton by pollutants riding on the micro-plastics that plankton consume, combined with a loss of shell-bearing organisms at a global average pH of just 7.95. The source of that claim pins the outcome as occurring by 2050. That is not a consensus opinion, but there’s not much to contradict it, just a lack of research and a lack of attention. An implication of the claim is that the ocean would no longer supply oxygen to the atmosphere.
EDIT: there is one area where there is some consensus, it’s that the coral reefs of the world will all be lost by 2050. That is a tipping point for ocean ecology.
Given the lead time of any plan to increase bivalve farming, jellyfish might do better as food from the ocean once the larger problem is recognized, after the first global famine of the century is over. There will probably be multiple food shortages this century, mishandling of those shortages, and lack of preparation for further shortages, approaching a global famine at least once and probably twice.
In the meantime, people everywhere will probably prefer fish like salmon and tuna and shrimp as seafood rather than exclusively bivalves.
Speaking for myself, I’m allergic to shellfish. Bivalves are a common allergen food. After trying some cricket flour in a protein bar, I developed a case of hives. Apparently an allergy to shellfish implies an allergy to insects because of some kind of biological similarity.
Sure, I think your work (and that of those folks at the UN), and the general topic of food security, are incredibly important.
I am not working in this area professionally, nothing even close to it.
Hi Brian, short answer, yes. Of course.
Look into jellyfish as a food source.
The death of the oceans is progressive and predictable, if you assume that causes of it continue into the future.
Thank you for your comment. I think that in some cases, subconscious calculations of expected value motivate actions.
But I don’t think that expected value calculations faithfully (reliably or consistently) represent a person’s degree of conviction (or confidence) that an outcome occurs given the person’s actions.
In particular, I suggest alternatives at work when people claim that their decision is to choose an expected value of very low probability and very high value:
* that those people would have fun during the pursuit, and so choose the pursuit of the outcome.
* that those people have a hidden (benign) agenda behind their pursuit of the outcome.
* that those people have an idea of what they could (dubiously) achieve that seems real and therefore, somehow, likely, or even certain.
What I don’t think they have is a strong expectation that they will fail. We are not wired to meaningfully pursue outcomes that we believe, really believe, will not occur. What they should have, if their probability estimate of success gets really low, is a true expectation that their efforts will fail. In some cases they don’t have that expectation because subconsciously they have a simple, clear, and vivid idea of that unlikely outcome. They pursue it even though the pursuit is high cost and the outcome is virtually impossible.
Well-studied Existential Risks with Predictive Indicators
It’s awkward to interpret mathematical judgements about a value that is described first as an unknown, then as a supposition about one’s internal process of deciding an arbitrary value for that unknown, and finally as a possible range varying over a large magnitude. That is what I decided the report on consciousness (and the speculation about moral weights) describes.
I would like to learn more about how EA folks typically assign evidence for the presence of different kinds of consciousness or moral weight of different species. In particular, what evidence helps you decide the presence of different aspects of consciousness in specific amounts? What evidence helps you decide the moral weight of a person of one species relative to another?
Finally, what is EA speculation about more traditional models of morality, those that rely on a moral identity, judgements of right and wrong, and, in particular, the symbolic importance of actions, even when the actions have (potentially) minimal verifiable consequences for others (for example, catching a fly and releasing it outside)?
I took the clock speed, unity, and intensity factors to be the aspects of consciousness about which one gathered evidence.
Total hedonic utilitarianism is mathematically interesting. I should explore its logical implications.
I appreciate what you describe as heuristics. In my everyday life I apply heuristics.
Morality is informed by heuristics that determine consequences of actions or by heuristics that determine the symbolic content of actions (their subjective or intersubjective meaning).
EDIT: morality is also informed by heuristics that determine intentions of actions, irrespective of consequences of actions, but that was not my interest here.
I wonder what heuristics the EA community officially acknowledges as relevant to understanding the level of consciousness or the moral weight of beings from other species.
The value of a longtermist view depends on the control you believe that you can exert over the future. While you might find moral value in creating an actual future, a hypothetical future that you believe that you will not create has no moral significance for your present actions, in terms of number of lives present, circumstances present at that future time, or any other possible feature of it.
Put differently: to declare a possibility that your actions turn out to be necessary (or even sufficient) causes of future events, without believing that those future events will actually occur after your actions, is to imply that the consequences of your actions lack moral significance to you. And that’s longtermism in a nutshell: actions in pursuit of an implausible future.
How do you derive the credence you give to each moral view you hold, by the way, those numbers like 60%? What do those percentages mean to you? Are they a historical account of the frequency of your actual moral views arbitrarily occurring, one then another in some sequence, or are they a subjective experience of the amount of belief in each view that you hold during a particular instance of comparing different moral views, or something else? Are they a “belief in your beliefs”? Are you assigning probabilities to the outputs of your self-awareness?
Climate Change Is Now Self-Amplifying
“They conclude that temperatures can rise to 11 degrees Celsius before the share of uninhabitable areas begins to include most of today’s population. Even at 10 degrees Celsius, the damage is equivalent only to a recession that would set us back by 20 years. However, this still does not imply extinction.”
Do I understand you right, that they conclude that a rise of global average temperatures by 11 degrees Celsius will then begin to make uninhabitable areas of locations currently inhabited by people?
yes, but I wonder if my restatement is a correct interpretation of what the OP meant.
Hello Linch.
Thanks for your comment. Sure, after browsing the post that you referenced, I have a response to it that I can share with you.
As I wrote already, tipping point mechanisms are not incorporated into IPCC carbon budgets. I don’t believe that their potential forcings are integrated into policy models of temperature change due to anthropogenic emissions in general.
As information about actual changes to tipping points accumulates, estimates of the risk of those tipping points forcing temperature upward increase, but there is still a delay between research findings and their integration into consensus belief in the climate science community. There might be different reasons for that delay in different cases.
Actual data shows some tipping points (arctic ice melt, permafrost melt, rainforest losses, Greenland ice melt, global wind patterns, West Antarctic ice melt) beginning their forcings now and at an accelerating rate.
Older models of changes in large natural systems like plankton populations or Greenland ice sheets (and the available data about them) made steady-state approximations of those systems seem reasonable over this century. However, because climate scientists see more change than they expect at lower amounts of temperature change, those approximations are no longer reasonable.
I believe that the equilibrium climate sensitivity metric that the authors quote does not account for tipping point processes forcing temperature increases to any great extent. I also doubt its reliability even if steady state approximations for tipping points were reasonable, but I don’t have answers for those doubts now.
The value of the ocean as a carbon sink has declined since the start of the industrial revolution. It will continue to decline, but at rates that are not well-studied yet. The authors of the “Good news” article expect atmospheric CO2 to decline as land and ocean sinks absorb it. However, anthropogenic global warming, other anthropogenic forcing of natural systems (for example, poisoning plankton with micro-plastic or burning down rainforest), and indirect climate interactions reduce the capacity of those sinks. How much and with what consequences will be observable over this century.
Global warming is well-studied in the sense that climate scientists can make broad statements about eventual impacts and suggest policies that will avoid the continuation of global warming mechanisms. Or rather, that is how it used to be. Now that tipping points are an immediate concern, climate scientists have to play catch up with actual global warming events and revise their estimates of eventual impacts as new amplifying mechanisms are measured in action and models are revised to include more features or greater detail. There is not enough consensus on tipping point mechanisms or their consequences between scientists working within governments and those with more freedom to speak. From what I can tell, there are, and might continue to be, peer-reviewed studies of alarming evidence for future climate change impacts that the IPCC will not mention in its policy guidance because of filtering by government editors.
The authors of the ‘Good news on climate change’ post say:
“...in order for us to follow SSP5-RCP8.5, there would have to be very fast economic growth and technological progress, but meagre progress on low carbon technologies. This does not seem very plausible.”
I disagree. To follow RCP8.5, there would need to be continuing economic and population growth but not technological progress sufficient to displace dirty technologies.
Nanotechnological architecture, product manufacture, and pollution clean-up (as originally envisioned by Eric Drexler in “Engines of Creation”) have value because that hypothetical technology would allow mega-engineering projects over time frames of a few days or weeks. With large-scale nanotechnology applications, humanity could develop and deploy global geoengineering tools, test them, and remove them if they create more problems than they solve. Meanwhile, our civilization could use nanotechnology to achieve energy efficiencies and clean technologies that replace dirty technologies and clean up their pollution quickly, within useful time frames of just a few months. Fantastic!
With nanotechnology in use, humanity doesn’t have the problems of:
* poverty preventing change-over to low-GHG technologies at scale.
* using high-GHG technologies to manufacture low-GHG technologies at scale.
* acquiring the source materials for the enormous manufacturing output required.
* taking longer than the time available to produce and integrate low-GHG technologies into our economies.
* industrial and construction waste damaging the biosphere or increasing global warming.

Nanotechnology helps humanity avoid the scrooge path of increasing its population while resource limits, biosphere losses, and climate change reduce quality of life. I would not propose nanotechnology as a solution, but humanity needs working methods of cooling the planet and thriving in the meantime. Nanotechnology would work.
If nanotechnology of the sort I’m describing becomes a reality, then that will be the news that the post authors want.
“In any attempt to do good, not actual consequences, but Expected Value matters.”
You ascribe probabilities to outcomes of your actions in your expected value model of your actions.
Accordingly, the altruism of the consequences of your actions is only certain in your mind when you believe that you know the actual consequences.
You might believe that you only know the actual consequences:
* through retrodiction (understanding your actions’ past consequences from a present perspective).
* through prediction (having certainty about your actions’ future consequences).
* through observation (observing the consequences occur).
* through a thought experiment (you’ll never know the real consequences of your actions).
* through control (controlling what happens through your actions).
* through real-time involvement (interacting with what happens as it occurs during your actions).
* by some other mechanism (for example, moralist prescriptions of specific actions and predictions of consequences).
Do you believe that you do good for others through your actions before you believe that you know the actual consequences of those actions?
Do you believe that some actions you choose among cause good consequences at the time that you choose among them?
If you hold those beliefs, then why do Expected Value calculations matter to your doing good for others?
You are welcome.
I have seen plenty of discussion from climate scientists warning against comparing the Anthropocene with earlier geologic time periods; comparing humans with mammals alive during an earlier period is not an apples-to-apples comparison.
FYI, I believe that locking in a rise in global average temperature of 14 degrees Celsius can happen within a 100-year time frame. If it happens, Earth will certainly be uninhabitable, but the mammals could all be dead before then.
My hopes for our long-term future rest with engineering technology that doesn’t exist yet.
Hello.
Ideas to improve the Effective Altruism movement include:
* include scoring, ranking, and distance measures of the altruistic value of the outcome of all personal behaviors, including all spending behaviors.
* research the causal relations of personal behaviors and the altruistic value of the consequences of personal behaviors.
* treat altruistic value as a relative and subjective metric with positive, null, and negative possible values.
* provide public research and debate on the size and certainty of altruistic values assigned to all common human behaviors (by individual EA practitioners).
Successful implementation of these ideas yields:
* robust maps of consequences of all personal behaviors and their relative altruistic value.
* an end to context-limited assessments of one’s effective altruism over one’s life so far.
-Noah