Part 4: Reflections after attending the CEA Intro to EA Virtual Program in Summer 2023 – Chapter 4: Our Final Century?
Welcome back! If you missed it, part 1 details some background on this post series, which in essence is a collection of my reflections after taking the CEA Intro to EA Virtual Program course this past summer. You can find additional previous parts here: part 2 & part 3. This post is for Chapter 4: Our Final Century? in the EA Handbook.
My awesome facilitator during the EA virtual program course (and the first person I’ve ever exchanged EA thoughts with) was kind enough to edit and help me further enrich this post. Thank you @NBjork!
Let’s get to it:
As mentioned in the last chapter about radical empathy, helping the world can look like saving lives, extending lives, and/or reducing suffering. That help should be as effective as possible, regardless of geographical, species, or time constraints. In this chapter the conversation turns to existential risks, so the focus is on saving and extending lives. One might understand existential risk as a threat to our very potential to keep improving the world: if we no longer exist, our options for helping have already been extinguished. For this reason, reducing existential risk is a primary moral imperative for many individuals and organizations. Our first priority is to survive; so long as we do, there remains some margin of hope to continue to save lives, extend lives, and reduce suffering. The other things we care about (justice, equality, art, culture, the environment, etc.) all have a greater-than-zero chance to flourish to unprecedented levels in the future, provided an existential catastrophe does not transpire first.
Naturally we should try to avoid tragedies, because they often end or shorten many lives and cause great suffering. However, the benchmark for what counts as a tragedy is very subjective and covers an extraordinary range of impact. A mass shooting is a tragedy, but so is an earthquake, a war, a genocide, a pandemic, starvation, even a suicide. So even though a tragedy is certainly a negative, the magnitude of that negativity varies. Existential risk supersedes all of these: the appropriate way to think of extinction is not as one more tragedy, but as the “final tragedy”.
One way to categorize existential risks is as natural vs. human-caused. As it stands, human-caused risks are estimated to be significantly higher than natural extinction risks. Furthermore, a very small number of human decision-makers now have the power to destroy the entire world, and this has been true since the 1950s and the advent of nuclear weapons.
Within the “scale, neglectedness and solvability” framework, many EA organizations agree that safeguarding the future is the highest goal. However, that is not always easy to show with quantitative results. Longtermist interventions may save lives if future events unfold the way we expect, but if things transpire differently, the effort may turn out to have been wasted. For example, suppose we never experience dangerous AI because 10% of the AI safety work done worldwide eliminated that risk; in hindsight, the other 90% of the effort was a waste, and we should have expended that energy elsewhere. Unfortunately, we can’t say today which parts of our x-risk efforts will not end up being impactful, so we are stuck in this dilemma. In contrast, most interventions that help people today can be tied quite closely, with clear and straightforward evidence, to how many lives they actually help. For such interventions, 100% of the effort is impactful; the only question is the degree of the impact.
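To make that dilemma concrete, here is a toy calculation (every number is invented purely for illustration; none comes from the handbook or any real estimate). The idea is that ex ante, each unit of x-risk work carries the same expected value, even though ex post most of it may look wasted:

```python
# Toy comparison of ex-ante expected value vs. ex-post attribution.
# All numbers are invented for illustration.

p_risk = 0.10           # assumed probability the catastrophe would occur
value_if_averted = 1e9  # illustrative payoff of averting it (arbitrary units)
units_of_work = 100     # total units of x-risk effort worldwide

# Ex ante: no one knows which work will matter, so each unit of effort
# shares the expected value equally.
ev_per_unit = p_risk * value_if_averted / units_of_work
print(f"Ex-ante expected value per unit of effort: {ev_per_unit:,.0f}")

# Ex post: if only 10 of the 100 units turn out to have averted the risk,
# hindsight calls the other 90 "wasted" -- but that was unknowable
# when the effort was allocated.
useful_units = 10
print(f"Ex-post 'useful' share: {useful_units / units_of_work:.0%}")
```

The point of the toy example is that hindsight attribution (“only 10% of the work mattered”) can’t guide decisions made under uncertainty; ex ante, all 100 units looked equally worthwhile.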
Marginal impact speaks to the additional value created by your own work. An organization may be making impact X, and when you join it you add your own marginal impact Y1, for an outcome of X + Y1 = Z. Now X may be huge or it may be little, but it’s not the only thing to consider; Z should be weighed more heavily. Say another organization’s impact is A. Your marginal impact there may well be different, call it Y2, and combined A + Y2 = B. So don’t simply compare the X and A values of two separate organizations; also compare (and with larger weight) what your joining actually produces. Which is better, the final output Z, or B?
To figure out Z or B quantitatively, there are a few different frameworks that analyze scale, neglectedness, solvability, and personal fit, and try to make the calculation of expected value as objective as possible.
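As a purely illustrative sketch of such a calculation (the four factors come from the framework above, but the weights, scores, scoring function, and all numbers are hypothetical, not taken from any particular EA tool):

```python
# Hypothetical weighted-factor comparison between two roles.
# Factor scores (1-10), weights, and impact units are invented;
# X, A, Y1, Y2, Z, B all share the same arbitrary "impact units".

def marginal_impact(scale, neglectedness, solvability, personal_fit):
    """Toy score for your marginal impact (Y) in a given role."""
    weights = {"scale": 0.3, "neglectedness": 0.2,
               "solvability": 0.2, "personal_fit": 0.3}
    return (weights["scale"] * scale
            + weights["neglectedness"] * neglectedness
            + weights["solvability"] * solvability
            + weights["personal_fit"] * personal_fit)

# Org 1: large existing impact (X), but a weaker fit for you.
X = 100
Y1 = marginal_impact(scale=9, neglectedness=3, solvability=6, personal_fit=5)

# Org 2: smaller existing impact (A), but more neglected and a better fit.
A = 40
Y2 = marginal_impact(scale=7, neglectedness=8, solvability=6, personal_fit=9)

Z, B = X + Y1, A + Y2
print(f"Y1 = {Y1:.1f}, Y2 = {Y2:.1f}")  # your marginal contributions
print(f"Z = {Z:.1f}, B = {B:.1f}")      # the combined outcomes
```

Note how in this made-up case the combined total Z favors the first organization while your marginal contribution Y2 favors the second; surfacing exactly that tension is what the comparison is for.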
But I would like to remind everyone that some factors, especially personal fit, can be very tricky to analyze with mathematical formulas and logic. Humans are not computers, after all, so give your human interests and motivations some space to reside and grow alongside any quantitative analysis.
Those who are new to EA, as I am, may note that they have been hearing about climate change for most of recent decades. From the individual to the national to the global level, countless efforts to address climate change have entered our daily lives and conversations. The future impacts of climate change can without a doubt be devastating and (to use a word highlighted earlier in this post) tragic. Many climate change after-effects are already resulting in tragedies around the planet. Within EA, however, the conversation isn’t just about tragedies and catastrophes, but about their ultimate potential, i.e. extinction. Those analyses for the most part conclude that the risks of climate change will be devastating and catastrophic, but not at the extinction threshold. For this reason, in the EA community climate change receives a lower risk designation compared to some other cause areas. It is acknowledged, however, that climate change can amplify the other existential risks in our world.
Thank you all for taking the time to read through these reflections, and feel free to leave any feedback you think is relevant. I am especially open to resources that expand on these thoughts further!
Look out for the Chapter 5 reflection post soon!