The Moral Value of the Far Future
Note: The Open Philanthropy Project was formerly known as GiveWell Labs. Before the launch of the Open Philanthropy Project Blog, this post appeared on the GiveWell Blog. Uses of “we” and “our” in the below post may refer to the Open Philanthropy Project or to GiveWell as an organization. Additional comments may be available at the original post.
A popular idea in the effective altruism community is that most of the people we can help (with our giving, our work, etc.) are people who haven’t been born yet. By working to lower global catastrophic risks, speed economic development and technological innovation, and generally improve people’s resources, capabilities, and values, we may have an impact that (even if small today) reverberates for generations to come, helping more people in the future than we can hope to help in the present.
This idea is sometimes coupled with the belief that the most important goal of an altruist should be to reduce “existential risk”: the risk of an extreme catastrophe that causes complete human extinction (as, for example, a sufficiently bad pandemic—or extreme unexpected developments related to climate change—could theoretically do), and thus forecloses the existence of large numbers of future generations.
We are often asked about our views on these topics, and this post attempts to lay them out. There is not complete internal consensus on these matters, so I speak for myself, though most staff members would accept most of what I write here. In brief:
I broadly accept the idea that the bulk of our impact may come from effects on future generations, and this view causes me to be more interested in scientific research funding, global catastrophic risk mitigation, and other causes outside of aid to the developing-world poor. (If not for this view, I would likely favor the latter and would likely be far more interested in animal welfare as well.) However, I place only limited weight on the specific argument given by Nick Bostrom in “Astronomical Waste”—that the potential future population is so massive as to clearly (in a probabilistic framework) dwarf all present-day considerations. More
I reject the idea that placing high value on the far future—no matter how high the value—makes it clear that one should focus on reducing the risks of catastrophes such as extreme climate change, pandemics, misuse of advanced artificial intelligence, etc. Even one who fully accepts the conclusions of “Astronomical Waste” has good reason to consider focusing on shorter-term, more tangible, higher-certainty opportunities to do good—including donating to GiveWell’s current top charities and reaping the associated flow-through effects. More
I consider “global catastrophic risk reduction” to be a promising area for a philanthropist. As discussed previously, we are investigating this area actively. More
Those interested in related materials may wish to look at two transcripts of recorded conversations I had on these topics: a conversation on flow-through effects with Carl Shulman, Robert Wiblin, Paul Christiano, and Nick Beckstead and a conversation on existential risk with Eliezer Yudkowsky and Luke Muehlhauser.
The importance of the far future
As discussed previously, I believe that the general state of the world has improved dramatically over the past several hundred years. It seems reasonable to state that the people who made contributions (large or small) to this improvement have made a major difference to the lives of people living today, and that when all future generations are taken into account, their impact on generations following them could easily dwarf their impact in their own time.
I believe it is reasonable to expect this basic dynamic to continue, and I believe that there remains huge room for further improvement (possibly dwarfing the improvements we’ve seen to date). I place some probability on global upside possibilities including breakthrough technology, space colonization, and widespread improvements in interconnectedness, empathy and altruism. Even if these don’t pan out, there remains a great deal of room for further reduction in poverty and in other causes of suffering.
In “Astronomical Waste,” Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations. I see no obvious analytical flaw in this claim, and give it some weight. However, because the argument relies heavily on specific predictions about a distant future, seemingly (as far as I can tell) backed by little other than speculation, I do not consider it “robust,” and so I do not consider it rational to let it play an overwhelming role in my belief system and actions. (More on my epistemology and method for handling non-robust arguments containing massive quantities here.) In addition, if I did fully accept the reasoning of “Astronomical Waste” and evaluate all actions by their far-future consequences, it isn’t clear what implications this would have. As discussed below, given our uncertainty about the specifics of the far future and our reasons to believe that doing good in the present day can have substantial impacts on the future as well, it seems possible that “seeing a large amount of value in future generations” and “seeing an overwhelming amount of value in future generations” lead to similar consequences for our actions.
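To make the shape of Bostrom’s argument concrete, here is a minimal sketch of the expected-value arithmetic in Python. Every number in it is an illustrative assumption of my own, chosen only to exhibit the structure of the argument; none of the figures comes from the essay itself.

```python
# Sketch of the expected-value argument: even heavy probability discounts
# fail to offset an astronomically large possible future population.
# All inputs below are assumed, illustrative values, not Bostrom's figures.

future_lives = 1e30          # assumed: possible lives under space colonization
p_future = 1e-6              # assumed: probability such a future is realized at all
risk_reduction = 1e-9        # assumed: cut in extinction risk from some action

present_lives_helped = 1e6   # assumed: lives helped by a strong present-day intervention

# Expected number of future lives enabled by the far-future-oriented action:
ev_far_future = future_lives * p_future * risk_reduction   # 1e15
ev_present = present_lives_helped                          # 1e6

print(f"far-future EV: {ev_far_future:.0e}; present-day EV: {ev_present:.0e}")
```

Under inputs like these, the far-future term dwarfs the present-day term by many orders of magnitude. The arithmetic itself is trivial; the objection above concerns robustness, i.e., whether speculative inputs like these should be allowed to dominate one’s decisions at all.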
Catastrophic risk reduction vs. doing tangible good
Many people have cited “Astronomical Waste” to me as evidence that the greatest opportunities for doing good are in the form of reducing the risks of catastrophes such as extreme climate change, pandemics, problematic developments related to artificial intelligence, etc. Indeed, “Astronomical Waste” seems to argue something like this:

For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative “Maximize expected aggregate utility!” can be simplified to the maxim “Minimize existential risk!”.
I have always found this inference flawed, and in my recent discussion with Eliezer Yudkowsky and Luke Muehlhauser, it was argued to me that the “Astronomical Waste” essay never meant to make this inference in the first place. The author’s definition of existential risk includes anything that stops humanity far short of realizing its full potential—including, presumably, stagnation in economic and technological progress leading to a long-lived but limited civilization. Under that definition, “Minimize existential risk!” would seem to potentially include any contribution to general human empowerment.
I have often been challenged to explain how one could possibly reconcile (a) caring a great deal about the far future with (b) donating to one of GiveWell’s top charities. My general response is that in the face of sufficient uncertainty about one’s options, and lack of conviction that there are good (in the sense of high expected value) opportunities to make an enormous difference, it is rational to try to make a smaller but robustly positive difference, whether or not one can trace a specific causal pathway from doing this small amount of good to making a large impact on the far future. A few brief arguments in support of this position:
I believe that the track record of “taking robustly strong opportunities to do ‘something good’ ” is far better than the track record of “taking actions whose value is contingent on high-uncertainty arguments about where the highest utility lies, and/or arguments about what is likely to happen in the far future.” This is true even when one evaluates track record only in terms of seeming impact on the far future. The developments that seem most positive in retrospect—from large ones like the development of the steam engine to small ones like the many economic contributions that facilitated strong overall growth—seem to have been driven by the former approach, and I’m not aware of many examples in which the latter approach has yielded great benefits.
I see some sense in which the world’s overall civilizational ecosystem seems to have done a better job optimizing for the far future than any of the world’s individual minds. It’s often the case that people acting on relatively short-term, tangible considerations (especially when they did so with creativity, integrity, transparency, consensuality, and pursuit of gain via value creation rather than value transfer) have done good in ways they themselves wouldn’t have been able to foresee. If this is correct, it seems to imply that one should be focused on “playing one’s role as well as possible”—on finding opportunities to “beat the broad market” (to do more good than people with similar goals would be able to) rather than pouring one’s resources into the areas that non-robust estimates have indicated as most important to the far future.
The process of trying to accomplish tangible good can lead to a great deal of learning and unexpected positive developments, more so (in my view) than the process of putting resources into a low-feedback endeavor based on one’s current best-guess theory. In my conversation with Luke and Eliezer, the two of them hypothesized that the greatest positive benefit of supporting GiveWell’s top charities may have been to raise the profile, influence, and learning abilities of GiveWell. If this were true, I don’t believe it would be an inexplicable stroke of luck for donors to top charities; rather, it would be the sort of development (facilitating feedback loops that lead to learning, organizational development, growing influence, etc.) that is often associated with “doing something well” as opposed to “doing the most worthwhile thing poorly.”
I see multiple reasons to believe that contributing to general human empowerment mitigates global catastrophic risks. I laid some of these out in a blog post and discussed them further in my conversation with Luke and Eliezer.
For one who accepts these considerations, it seems to me that:
It is not clear whether placing enormous value on the far future ought to change one’s actions from what they would be if one simply placed large value on the far future; the sketch after this list illustrates why. In both cases, attempts to reduce global catastrophic risks and otherwise plan for far-off events must be weighed against attempts to do tangible good, and the question of which has more potential to shape the far future will often be a difficult one to answer.
If one sees few robustly good opportunities to “make a huge difference to the far future,” the best approach to making a positive far-future difference may be “make a small but robustly positive difference to the present.”
One ought to be interested in “unusual, outstanding opportunities to do good” even if they don’t have a clear connection to improving the far future.
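Here is a minimal sketch of the first point above, under the assumed, illustrative premise that a tangible-good action and a risk-reduction action both have nonzero expected far-future effects (all numbers are mine, for exposition only):

```python
# Sketch: if every candidate action has some expected effect on the far
# future, then increasing the weight placed on the far future eventually
# stops changing the ranking of actions. All inputs are assumed values.

def total_value(present_good, far_future_good, weight):
    """Simple additive model: present-day value plus weighted far-future value."""
    return present_good + weight * far_future_good

# Assumed inputs: tangible good with flow-through effects on the far future,
# and direct risk reduction aimed at the far future.
tangible = {"present_good": 1.0, "far_future_good": 0.8}
risk_reduction = {"present_good": 0.01, "far_future_good": 0.7}

for weight in (10.0, 1e6, 1e12):   # "large" through "enormous" weights
    t = total_value(weight=weight, **tangible)
    r = total_value(weight=weight, **risk_reduction)
    print(f"weight={weight:.0e}: tangible={t:.3g}, risk_reduction={r:.3g}")
```

For any sufficiently large weight, the comparison collapses to which action has the greater expected far-future effect, which is exactly the question described above as difficult to answer; scaling the weight from “large” to “enormous” changes nothing further.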
With that said:
This line of reasoning is not the only or overwhelming consideration in our current top charity recommendations. As discussed in the previous section, we place some weight on the importance of the far future but believe it would be irrational to let our beliefs about it take on excessive weight in our decision-making. The possibility that arguments about the importance of the far future are simply mistaken, and that the best way to do good is to focus on the present, carries weight.
I also do not claim that the above reasoning should push all those interested in the far future into nearer-term, higher-certainty actions. People who are well-positioned to take on low-probability, high-upside projects aiming to make a huge difference—especially when their projects are robustly worthwhile and especially when their projects represent promising novel ideas—should do so. People who have formed the deep understanding necessary to evaluate such projects well should not take us to be claiming that their convictions are irrational given what they know (though we do believe some people form irrationally confident convictions based on speculative arguments). As GiveWell has matured, we’ve become (in my view) much better-positioned to take on such low-probability, high-upside projects; hence our launch of GiveWell Labs and our current investigations of global catastrophic risks. The better-informed we become, the more willing we will be to go out on a limb.
Global catastrophic risk reduction as a promising area for philanthropy
I see global catastrophic risk reduction as a promising area for philanthropy, for many of the reasons laid out in a previous post:
It is a good conceptual fit for philanthropy, which is seemingly better suited than other approaches to working toward diffuse benefits over long time horizons.
Many global catastrophic risks appear to get little attention from philanthropy.
I place some (though not overwhelming) weight on the argument that the implications of a catastrophe for the far future could be sufficiently severe and long-lasting that even a small mitigation could have huge value.
I believe that declaring global catastrophic risk reduction to be the clearly most important cause to work on, on the basis of what we know today, would not be warranted. A broad variety of other causes could be superior under reasonable assumptions. Scientific research funding may be far more important to the far future (especially if global catastrophic risks turn out to be relatively minor, or science turns out to be a key lever in mitigating them). Helping low-income people (including via our top charities) could be the better area to work in if our views regarding the far future are fundamentally flawed, or if opportunities to substantially mitigate global catastrophic risks turn out to be highly limited. Working toward better public policy could also have major implications for both the present and the future, and having knowledge of this area could be an important tool no matter what causes we end up working on. More generally, by exploring multiple promising areas, we create better opportunities for “unknown unknown” positive developments, and the discovery of outstanding giving opportunities that are difficult to imagine given our current knowledge. (We also will become more broadly informed, something we believe will be very helpful in pitching funders on the best giving opportunities we can find—whatever those turn out to be.)