A Long-run Perspective on Strategic Cause Selection and Philanthropy
Co-written by Nick Beckstead and Carl Shulman
Introduction
A philanthropist who will remain anonymous recently asked us what we would do if we didn’t face financial constraints. We gave a detailed answer that we thought we might as well share with others, who may also find our perspective interesting. We gave the answer largely in the hope of creating some interest in our way of thinking about philanthropy and in some of the causes that we find interesting for further investigation, and because we thought the answer would be fruitful for conversation.

Our honest answer to your question
Our honest answer to your question is that we would systematically examine a wide variety of causes and opportunities with the intention of identifying the ones which could use additional money and talent to produce the best long-run outcomes. This would look a lot like setting up a major foundation—which is unsurprising, given that many people in this situation do set up foundations—so we will concentrate on the distinguishing or less typical features of our approach:

Unlike many foundations, we would place a great deal of emphasis on selecting the highest-impact program areas, rather than selecting program areas for other reasons and working hardest to find the best opportunities within those areas. Like GiveWell, we believe that the choice of program areas may be one of the most important decisions a major philanthropist makes and is consistently underemphasized.
We would invest heavily in learning, funding systematic examination of the spectrum of opportunities, and the transparent publication of our process and findings.
In addition to sharing information about giving opportunities, we would share detailed information about talent gaps, encouraging people with the right abilities to seek out opportunities in promising areas that are constrained by people rather than money.
We would measure impact primarily in terms of very long-run positive consequences for humanity, as outlined in Nick’s PhD thesis.
We would be skeptical of our intuitions, and check them through such means as external review, the collection of track records for our predictions, structured evaluations, and the use of simple and sophisticated methods of aggregating and improving on expert opinion (e.g. the forecasting training and aggregation methods developed by Philip Tetlock, calibration training, prediction markets, and anonymous surveys of appropriate experts).
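To make the aggregation idea concrete, here is a minimal illustrative sketch of one simple pooling method (this is not a description of any specific tool we have in mind, and the extremizing weight is arbitrary):

```python
import math

def pool_forecasts(probs, extremize=1.0):
    """Combine several experts' probability estimates for the same event.

    Averages the forecasts in log-odds space, then optionally 'extremizes'
    the pooled estimate (pushes it away from 0.5), a simple adjustment that
    forecasting-tournament research has found often improves accuracy.
    Probabilities must lie strictly between 0 and 1.
    """
    logits = [math.log(p / (1.0 - p)) for p in probs]
    pooled = extremize * sum(logits) / len(logits)
    return 1.0 / (1.0 + math.exp(-pooled))

# Illustrative use: five anonymous expert estimates of the same event.
estimates = [0.30, 0.45, 0.50, 0.60, 0.35]
print(pool_forecasts(estimates))                 # plain log-odds pooling
print(pool_forecasts(estimates, extremize=1.5))  # mildly extremized variant
```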
Briefly:
We believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity. We think this has significant practical implications when making trade-offs between short-term welfare on the one hand and, on the other, the broad functioning of society, our ability to face major global challenges and opportunities, and society’s resilience to global catastrophes.
Five causes we are interested in investigating first include immigration reform, methods for improved forecasting, an area we call “philanthropic infrastructure,” catastrophic risks to humanity, and research integrity (discussed under “Meta-research” below). These would be areas for investigation and experimentation, and we would pursue them in the short run primarily for the sake of gaining information about how attractive they are in comparison with other areas. There are many other causes we would like to investigate early on, and we would begin investigating those causes less deeply and in parallel with our investigations of the causes we are most enthusiastic about. We’d be happy to discuss the other causes with you as well.
Is the long run actionable in the short run?
As just mentioned, we believe that maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity, and that this has strategic implications for people aiming to maximize good accomplished with their resources. We think these implications are significant when choosing between causes or program areas, and less significant when comparing opportunities within program areas.

There is a lot of detail behind this perspective and it is hard to summarize briefly. But here is an attempt to quickly explain our reasoning:
We think humanity has a reasonable probability of lasting a very long time, becoming very large, and/or eventually enjoying a very high quality of life. This could happen through radical (or even moderate) technological change, if industrial civilization persists as long as agriculture has persisted (though upper limits for life on Earth are around a billion years), or if future generations colonize other regions of space. Though we wouldn’t bet on very specific details, we think some of these possibilities have a reasonable probability of occurring.
Because of this, we think that, from an impartial perspective, almost all of the potential good we can accomplish comes through influencing very long-run outcomes for humanity.
We believe long-run outcomes may be highly sensitive to how well humanity handles key challenges and opportunities, especially challenges from new technology, in the next hundred years or so.
We believe that (especially with substantial resources) we could have small but significant positive impacts on how effectively we face these challenges and opportunities, and thereby affect expected long-run outcomes for humanity.
We could face these challenges and opportunities more effectively by preparing for specific challenges and opportunities (such as nuclear security and climate change in the past and present, and advances in synthetic biology and artificial intelligence in the future), or by enhancing humanity’s general capacities to deal with these challenges and opportunities when we face them (through higher rates of economic growth, improved political coordination, improved use of information and decision-making for individuals and groups, and increases in education and human capital).
We believe that this perspective diverges from the recommendations of a more short-run focus in a few ways.
First, when we consider attempts to prepare for global challenges and opportunities in general, we weigh such factors as economic output, log incomes, education, quality-adjusted life-years (QALYs), scientific progress, and governance quality differently than we would if we put less emphasis on long-run outcomes for humanity. In particular, a more short-term focus would lead to a much stronger emphasis on QALYs and log incomes, which we suspect could be purchased more cheaply through interventions targeting people in developing countries, e.g. through public health or more open migration. Attending to long-run impacts creates a closer contest between such interventions and those which increase economic output or institutional quality (and thus the quality of our response to future challenges and opportunities). Our perspective would place an especially high premium on intermediate goals such as the quality of forecasting and the transmission of scientific knowledge to policy makers, which are disproportionately helpful for navigating global challenges and opportunities.
Second, when there are opportunities for identifying specific major challenges or opportunities for affecting long-run outcomes for humanity, our perspective favors treating these challenges and opportunities with the utmost seriousness. We believe that reducing the risk of catastrophes with the potential to destroy humanity—which we call “global catastrophic risks” or sometimes “existential risks”—has an unusually clear and positive connection with long-run outcomes, and this is a reason we are unusually interested in problems in this area.
Third, the long-run perspective values resilience against permanent disruption or worsening of civilization over and above resilience to short-term catastrophe. From a long-run perspective, there is an enormous difference between a collapse of civilization followed by eventual recovery, versus a permanent collapse of civilization. This point has been made by philosophers like Derek Parfit (very memorably at the end of his book Reasons and Persons) and Peter Singer (in a short piece he wrote with Nick Beckstead and Matt Wage).
Five causes we would like to investigate more deeply
Immigration reform
What it is: By “immigration reform,” we mean loosening immigration restrictions in rich countries with stronger political institutions, especially for people who are migrating from poor countries with weaker political institutions. We include both efforts to allow more high-skill immigration and efforts to allow more immigration in general. Some people to talk to in this area include Michael Clemens, Lant Pritchett, and others at the Center for Global Development. Fwd.us and the Krieble Foundation are two examples of organizations working in this area.

Why we think it is promising: Many individual workers in poor countries could produce much more economic value and better realize their potential in other ways if they lived in rich countries, meaning that much of the world’s human capital is being severely underutilized. This claim is unusually well supported by basic economic theory and the views of a large majority of economists. Many concerns have been raised, but we think the most plausible ones involve political feasibility and political and cultural consequences of migration.
Philanthropic infrastructure
What it is: By “philanthropic infrastructure,” we mean activities that expand the flexible capabilities of those trying to do good in a cause-neutral, outcome-oriented way. Some organizations in this area we are most familiar with include charity evaluator GiveWell, donation pledge organizations (Giving What We Can, The Life You Can Save, the Giving Pledge), and 80,000 Hours (an organization that provides information to help people make career choices that maximize their impact). There are many examples we are less familiar with, such as the Bridgespan Group and the Center for Effective Philanthropy. (Disclosure: Nick Beckstead is on the board of trustees for the Centre for Effective Altruism, which houses Giving What We Can, The Life You Can Save, and 80,000 Hours, though The Life You Can Save is substantially independent.)

Why we think it is promising: We are interested in this area because we want to build up resources which are flexible enough to ultimately support the causes and opportunities that are later found to be the most promising, and because we see a lot of growth in this area and think early investments may result in more money and talent available for very promising opportunities later on.
Methods for improved forecasting
What it is: Forecasting is challenging, and very high accuracy is difficult to obtain in many of the domains of greatest interest. However, a number of methods have been developed to improve forecasting accuracy through training, aggregation of opinion, incentives, and other means. Some examples include expert judgment aggregation algorithms, probability and calibration training, and prediction markets. We are excited about recent progress in this area in a prediction tournament sponsored by IARPA, which Philip Tetlock’s Good Judgment Project is currently winning.

Why we think it is promising: Improved forecasting could be useful in a wide variety of political and business contexts. Improved forecasting over a period of multiple years could improve overall preparedness for many global challenges and opportunities. Moreover, strong evidence of the superior performance of some methods of forecasting over others could help policymakers base decisions on the best available evidence. We currently have limited information about room for more funding for existing organizations in this area.
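As a concrete, purely illustrative sketch of the kind of scoring and calibration checks involved (our own sketch, not code from any of the projects named above):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better. A forecaster who always says 0.5 scores 0.25, so doing
    reliably better than that is evidence of genuine predictive skill.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.01)):
    """Compare stated probabilities with observed frequencies, bin by bin.

    A well-calibrated forecaster's '70%' events happen about 70% of the time;
    large gaps between the two columns indicate over- or under-confidence,
    which calibration training aims to correct.
    """
    table = []
    for lo, hi in zip(edges, edges[1:]):
        binned = [(p, o) for p, o in zip(forecasts, outcomes) if lo <= p < hi]
        if binned:
            mean_forecast = sum(p for p, _ in binned) / len(binned)
            observed_freq = sum(o for _, o in binned) / len(binned)
            table.append((lo, hi, len(binned), mean_forecast, observed_freq))
    return table

# Illustrative use:
print(brier_score([0.8, 0.6, 0.1], [1, 0, 0]))  # -> ~0.137
```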
Global catastrophic risk
What it is: Opportunities in this area focus on identifying and mitigating specific threats of human extinction, such as large asteroid impacts and tail risks of climate change and nuclear winter. Examples of interventions in this category include tracking asteroids (which has largely been completed for asteroids that threaten civilization, though not for comets), improving resilience of the food supply through cellulose-to-food conversion, disease surveillance (for natural or man-made pandemics), advocacy for non-proliferation of nuclear weapons, and research on other possible risks and methods for mitigating them. An unusual view we take seriously is that some of the most significant risks in this area will come from new technologies that may emerge this century, such as advanced artificial intelligence and advanced biological weapons. (We also believe technologies of this type have massive upside potential, which must be thought about carefully as we think about the risks.) Notable defenders of views in this vicinity include Martin Rees, Richard Posner, and Nick Bostrom. (Disclosure: Nick Bostrom is the Director of the Future of Humanity Institute, where Nick Beckstead is a research fellow and Carl Shulman is a research associate.)

Why we think it is promising: Progress in this area has a clear relationship with long-run outcomes for humanity. There have been some very good buys in this area in the past, such as early asteroid tracking programs. Apart from climate change, spending in this area amounts to around 0.1% of total foundation spending, and little of that carefully distinguishes between large catastrophes and catastrophes with the potential to significantly change long-run outcomes for humanity.
Meta-research
What it is: We will make use of GiveWell’s explanation of the cause area here and here.

Why we think it is promising: We believe that many improvements in meta-research can accelerate scientific progress and make it easier for non-experts to discern what is known in a field. We believe this is likely to systematically improve our ability to navigate global challenges and opportunities. From a long-run perspective, the relative importance of different impacts of meta-research diverges from a short-term analysis: for example, the degree to which policymakers can understand the state of scientific knowledge at any given level of progress looms larger in comparison to simple acceleration of progress.
There are many object-level lines of evidence to discuss, but this is not the place for great detail (I recommend Nick Bostrom’s forthcoming book). One of the most information-dense is that surveys sent to the top 100 most-cited individuals in AI (identified using Microsoft’s academic search tool) resulted in a median estimate comfortably within the century, including substantial probability for the next few decades. The results were presented at the Philosophy and Theory of AI conference earlier this year and are on their way to publication.
Expert opinion is not terribly reliable on such questions, and we should probably widen our confidence intervals (extensive research shows that naive individuals give overly narrow intervals), assigning more weight to AI arriving surprisingly soon and surprisingly late than we otherwise would. We might also try to correct against a possible optimistic bias (which would bias towards shorter timelines and lower risk estimates).
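One crude, purely illustrative way to implement that widening is to blend the expert estimate with a maximally uncertain one; the 70/30 weighting below is arbitrary, not a recommendation:

```python
def widen(expert_prob, weight_on_expert=0.7, ignorance_prob=0.5):
    """Blend an expert's probability with a diffuse 'ignorance' value.

    Overly narrow expert estimates get pulled part of the way back toward
    maximal uncertainty rather than being taken at face value; applied
    across a full timeline distribution, the same mixture spreads credence
    toward both earlier and later arrival dates.
    """
    return weight_on_expert * expert_prob + (1 - weight_on_expert) * ignorance_prob

# e.g. an expert gives 90% to human-level AI within the century:
print(widen(0.90))  # -> 0.78 under the illustrative weighting
```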
The surveyed experts also assigned credences in very bad or existentially catastrophic outcomes that, if taken literally, would suggest that AI poses the largest existential risk (although some respondents may have interpreted the question to include comparatively lesser harms).
Extinction-level asteroid impacts, volcanic eruptions, and other natural catastrophes are relatively well-characterized and pose extremely low annual risk based on empirical evidence of past events. GiveWell’s shallow analysis pages discuss several of these, and the edited volume “Global Catastrophic Risks” has more on these and others.
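As a rough worked illustration of how low that annual risk is for the best-characterized case (the once-per-hundred-million-years rate for civilization-threatening impacts is a commonly cited ballpark, used here purely for illustration):

$$p_{\text{annual}} \approx \frac{1}{10^{8}\ \text{years}} = 10^{-8}\ \text{per year}, \qquad P(\text{at least one this century}) \approx 1 - \left(1 - 10^{-8}\right)^{100} \approx 10^{-6}.$$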
Climate scientists and the IPCC have characterized the risk of conditions threatening human extinction as very unlikely conditional on nuclear winter or severe continued carbon emissions, i.e. these are far more likely to cause large economic losses and death than to permanently disrupt human civilization.
Advancing biotechnology may make artificial diseases, intentionally engineered by large and well-resourced biowarfare programs to cause human extinction, an existential threat, although there is a very large gap between the difficulty of creating a catastrophic pathogen and a civilization-ending one.
An FHI survey of experts at an Oxford Global Catastrophic Risks conference asked participants to assign credences to the risk of various levels of harm from different sources in the 21st century, including over 1 billion deaths and extinction. Median estimates assigned greater credence to human extinction from AI than from conventional threats, including nuclear war or engineered pandemics, but greater credence to casualties of at least 1 billion from the conventional threats.
So the relative importance of AI is greater in terms of existential risk than global catastrophic risk, but seems at least comparable in the latter area as well.
What you’re describing sounds a lot like what we’re attempting to do with the Good Ventures-GiveWell partnership, excepting a few details. I wonder whether you also see them as similar and what you see as the notable differences?
Hi Cari,
I would certainly agree they are quite similar, and think that Good Ventures is closer to how we would do things than almost all existing foundations, and is tremendously good news for the world.
This makes sense. Nick has worked at GiveWell. I have been following and interacting with GiveWell since its founding. We share access to much of our respective knowledge bases, surrounding intellectual communities, and an interest in effective altruism, so we should expect a lot of overlap in our approaches to solving similar problems.
Growing the EA community’s capabilities and quality of decision-making, pursuing high value-of-information questions about the available philanthropic options, and similar efforts are robustly valuable.
It’s harder for me to pin down differences with GV because of my uncertainty about Good Ventures’ reasoning behind some of its choices. Posting conversations makes it easier to see what information GV has access to, but I feel I know a lot more about GW’s internal thinking than GV’s.
Relative to GiveWell, I think we may care more about protecting the long-term trajectory of civilization relative to short-term benefits. And, speaking for myself at least, I am more skeptical that optimizing for short-term QALYs or similar measures will turn out to be very close to optimizing for long-term metrics. I’m not sure about GV’s take on those questions.
At the tactical level, and again speaking for myself and not for Nick, based on my current state of knowledge I don’t see how GV’s ratio of learning-by-granting relative to granting to fund direct learning efforts is optimal for learning.
For example, GiveWell and Good Ventures now provide the vast majority of funding for AMF. I am not convinced that moving from $15 MM to $20 MM of AMF funding provides information close in value to what could be purchased if one spent $5 MM more directly on information-gathering. GiveWell’s main argument on this point has been inability to hire using cash until recently, but it seems to me that existing commercial and other services can be used to buy valuable knowledge.
I’ll mention a few examples that come to mind. ScienceExchange, a marketplace that connects funders and scientific labs willing to take on projects for hire, is being used by the Center for Open Science to commission replications of scientific studies of interest. Polling firms can perform polls and surveys, of relevant experts or of the general public or donors, for hire in a standardized fashion. Consulting firms with skilled generalists or industry experts can be commissioned at market rates to acquire data and perform analysis in particular areas. Professional fundraising firms could have been commissioned to try street fundraising or direct mail and the like for AMF to learn whether those approaches are effective for GiveWell’s top charities.
Also, in buying access to information from nonprofit organizations, it’s not easy for me to understand the relationship between the extent of access/information and the grant, e.g. why make a grant sufficient to hire multiple full time staff-years in exchange for one staff-day of time? I can see various reasons why one might do this, such as wariness from nonprofits about sharing potentially embarrassing information, compensating for extensive investments required to produce the information in the first place, testing RFMF hypotheses, and building a reputation, but given what I know now I am not confident that the price is right if some of these grants really are primarily about gaining information. [However, the grants are still relatively small compared to your overall resources, so such overpayment is not a severe problem if it is a problem.]
Zooming out back to the big picture, I’ll reiterate that we are very much on the same page and are great fans of GV’s work.
I broadly agree with Carl’s comment, though I have less of an opinion about the specifics of how you have done your learning grants. Part of your question may be, “Why would you do this if we’re already doing it?” I believe that strategic cause selection is an enormous issue and we have something to contribute. In this scenario, we certainly would want to work with you and like-minded organizations.
Hi Nick — I did not at all mean to imply, “Why would you do this if we’re already doing it?” I see enormous value in other people experimenting with strategic cause selection and was gratified to read this post. I simply was surprised that you didn’t mention that GiveWell Labs, by and large, is taking the approach you outlined, including investigating four of the five issue areas you mentioned. That made me think either we’re not communicating well enough about what we’re doing, which seems likely, or that you see the two approaches as more different than I do.
Thanks for the clarifications Cari, they definitely give a better picture of Good Ventures’ take on these questions.
“Both Good Ventures and GiveWell care deeply about protecting the long-term trajectory of civilization, and our research reflects this.”
It’s great to hear that news about Good Ventures. One thing I would add: I didn’t mean to imply that GiveWell places no significant weight on this, just that based on our conversations with various GiveWell staff that weight seems to be smaller (to a degree which varies depending on the staff member).
“We feel that we’re doing what we can in terms of paying for information directly.”
Glad to hear it.
“I hope that helps to clarify, and I also hope you and Nick will keep following and giving input on the GiveWell Labs effort, since it’s so closely aligned with your long-term thinking about strategic cause selection and philanthropy.”
We certainly will. As it happens, I just scheduled another meeting with GiveWell Labs yesterday.
Why do you think A.I. is imminent enough in the next century to be a comparable existential risk to the others you list?
Appreciating this thread.
Is there an online forum or group or email list where global catastrophic risk and responses is discussed seriously?
Here is one of the best places for discussion. The other is LessWrong. I hear FLI (futureoflife.org) also has a blog platform planned. CSER, GCRI, MIRI and GiveWell also provide updates on their progress from time to time.
I might well donate to this. You’ve got a good framework, which is that long-run impacts are important but tough to know. I agree with investigating all five of these topics and with changing institutions to address unknown future risks. That seems at least as likely to work as direct mitigation of known ones. Your comment on the relative importance of different kinds of meta-research for the far future also seems spot on.
Some smaller points:
I’m with you on immigration but for different reasons. I don’t see why increasing GDP is particularly great for maximizing long-run welfare since as Nick Bostrom says in his existential risk paper, what we really want to optimize for is safety. So my guess is that immigration’s biggest impact would be increasing good-faith cooperation between different countries to avoid dangerous unilateral initiatives, rather than boosting human capital.
http://www.nickbostrom.com/papers/unilateralist.pdf
Some other things I think might be worth looking into are
1. Not only foresight but methods for communicating whatever is found to policymakers, and in democratic countries, the public. There might arise situations where we can predict outcomes, but only a few people know and they are unable to act effectively. I happen to be thinking here about embryo selection. In general, I hope that, as Al Gore writes in his book “The Future,” we are able to “steer,” and especially to steer technological changes to suit current priorities instead of just having them drop into our laps out of nowhere. Or, as Paul Christiano says, we should increase the influence of human values over the far future. This is in contrast to Robin Hanson, who has actually written that voter foresight is bad.
http://www.overcomingbias.com/2011/01/against-voter-foresight.html
2. Lowering barriers to international trade and maybe promoting democracy because democratic countries tend to be more peaceful and internationally cooperative. But there might already be a lot of money flowing toward this.
http://longnow.org/seminars/02012/oct/08/decline-violence/
3. Whether we can really expect, in the case of AI, any current actions to persist into whatever future world could create a potentially dangerous, self-sufficient AI civilization. In other words, we already face high uncertainty about the efficacy of altering the political landscape right now or in the near future. The “track” leading to AI seems hugely volatile, adding a whole new layer of haze. This suggests to me that no action is now justified on this.
Lastly, as a practical issue if you did make an organization I would hope it could avoid taking a clear stance on the transhumanist vs. bioconservative question, since for me that might be a deal breaker, in contrast to the above. Unfortunately this is why I don’t donate to FHI.
Hi, your text mentions the importance of cause-neutrality but focuses on humanity, e.g. “maximizing good accomplished largely reduces to doing what is best in terms of very long-run outcomes for humanity.” Why don’t you include any other species?
To explain where I’m coming from: To my knowledge, GiveWell and Good Ventures also focus on “humanity” and talk about “humanitarians” but I’m not familiar with any argument that shows why that focus makes sense (I’m grateful to be pointed to one). Of course, I don’t expect you to answer on behalf of GW or GV, and I should ask them directly in public, I just mention them to explain that I wonder the same thing about other organizations that write about similar topics.
To me, it makes much more sense to replace ‘humanity,’ in your text with ‘beings that can suffer’ or similar.
Thanks!
We think many non-human animals, artificial intelligence programs, and extraterrestrial species could all be of moral concern, to degrees varying based on their particular characteristics but without species membership as such being essential. “Humanity” is used interchangeably in the text with “civilization,” a civilization for which humanity is currently in the driver’s seat.
Thanks for the VERY detailed answer. Surprised to learn the IPCC gives such a low credence to climate change being an existential, rather than just a catastrophic threat. I guess that’ll teach me to take vague claims of existential threat from the press’ science reporting and the odd blog less seriously(!).
n.b. there’s a broken link here now, the link to the Peter Singer article
This point has been made by philosophers like Derek Parfit (very memorably at the end of his book Reasons and Persons) and Peter Singer (in a short piece he wrote with Nick Beckstead and Matt Wage).
“short piece” should point to:
http://effective-altruism.com/ea/50/preventing_human_extinction/
Interesting stuff, but disappointed you don’t talk about discount rates.
Yes, discount rates are an important thing to discuss here. I briefly discuss them on pp. 63-64 of my dissertation (http://www.nickbeckstead.com/research). I endorse using discount rates on a case-by-case basis as a convenience for calculation, but count harms and benefits as, in themselves and apart from their consequences, equally important whenever they occur.
For further articulation of similar perspectives I recommend:
Cowen, T. and Parfit, D. (1992). “Against the Social Discount Rate.” In Justice Between Age Groups and Generations, pages 144–161. Yale University Press, New Haven.
and
http://rationalaltruist.com/2013/02/22/four-flavors-of-time-discounting-i-endorse-and-one-i-do-not/
What do you mean by “using discount rates on a case-by-case basis as a convenience for calculation”?
I don’t find your dissertation discussion very convincing (but then I’m an economist). I worry a lot more about the existing real children with glass in their feet right now (or intestinal worms or malaria or malnutrition or whatever) than the hypothetical potential children of the future who don’t exist yet, and in any case when they do will live in a substantially wealthier society in which everyone has access to good quality footwear.
I like to distinguish between pure discounting and discounting as a computational convenience. By “pure discounting,” I mean caring less about the very same benefit, which you’ll get with certainty in the future, than a benefit you can get now. I see this as a values question, and my preference is to have a 0% pure discount rate. One might discount as a computational convenience to adjust for returns on investment from having benefits arrive earlier, uncertainty about the benefits arriving, changes in future wealth, or other reasons.
When you are deciding how to discount, I find it easiest to think about the problem without any discounting of any kind (doing something like a classical utilitarian analysis) and explicitly think about the empirical effects. Then if you want to use discounting as a computational convenience, you can try to choose one that gives similar results to thinking about the problem without any kind of discounting.
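As a purely illustrative sketch of what that could look like (the growth, utility-curvature, and survival numbers below are placeholders, not figures from my dissertation or this thread): model the relevant factors explicitly, then back out the constant rate that reproduces the answer.

```python
import math

def explicit_value(benefit, years, growth=0.02, eta=1.0, annual_survival=0.999):
    """Value today of a consumption benefit arriving `years` from now, with the
    drivers modeled explicitly: recipients will be richer (CRRA/log utility with
    curvature eta), and there is some chance per year the benefit never arrives."""
    wealth_factor = math.exp(growth * years)      # how much richer future recipients are
    marginal_utility = wealth_factor ** (-eta)    # diminishing marginal utility of consumption
    p_arrives = annual_survival ** years          # probability the benefit materializes at all
    return benefit * marginal_utility * p_arrives

def equivalent_constant_rate(benefit, years, **kwargs):
    """Constant exponential discount rate that reproduces the explicit analysis."""
    value = explicit_value(benefit, years, **kwargs)
    return -math.log(value / benefit) / years

print(equivalent_constant_rate(1.0, 50))  # ~0.021, i.e. roughly eta*growth plus the annual hazard
```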
Regarding the hypothetical richer kids vs. current kids, I agree that one should make adjustments for uncertainty about whether there will be future kids, diminishing marginal utility of consumption, and beliefs about future growth. I don’t think this is well-captured by a constant exponential discount rate into the distant future. There are a lot of reasons I think this. Two I can quickly link to are here (http://www.overcomingbias.com/2009/09/limits-to-growth.html) and here (http://www.sciencedirect.com/science/article/pii/S009506969891052X).
I might be able to respond better if you told me how you think an appropriate treatment of discounting might affect the conclusions that Carl and I drew.
I think your choice of discount rate is going to fundamentally alter your investment decision, it’s not just some kind of marginal technical tweak.
In practice either you discount fairly heavily, as most public projects do, and end up putting most of your money into solving short-term suffering (as I think you should), or you discount lightly, and put most of your money into possible future catastrophic risk mitigation.
I don’t see how this is “computational convenience”—it’s fundamental.
I agree that a choice of discount rate is fundamentally important in this context. If you did the standard thing of choosing a constant discount rate (e.g. 5%) and used that for all downstream benefits, even ones millions of years into the future, that would make helping future generations substantially less important. By emphasizing the distinction between pure discounting and discounting as a computational convenience, I did not mean to suggest that views about how to discount future benefits were unimportant.
I was distinguishing between two possible motives for discounting because I think the distinction clarifies what the purpose of discounting should be. The two purposes are hard to disentangle because they overlap in practice, but I think they diverge when it comes to distant future generations. I can try to explain more if it isn’t clear what distinction I intend. It’s the difference between “Benefits now are better just because that’s what people prefer” and “benefits now are better because they cause compounding growth, future people will be richer, the future is uncertain, etc.” If you go for the second answer, the conclusion isn’t something like “use a 5% discount rate for all benefits, even ones a million years out,” but instead “use a discount rate that accurately reflects your beliefs about growth, uncertainty, marginal value of consumption, etc., in the distant future.” For reasons I linked to in Hanson and Weitzman, that’s not what I expect. Briefly, constant exponential growth over million-year timescales is hard (but not impossible) to square with physics-imposed constraints on the resources we could have access to. And, as Weitzman argues, I believe uncertainty about future growth results in a form of discounting that looks more hyperbolic and less exponential in the long run. These differences are not very consequential over the next 50 years or so, but I believe they are very consequential when you consider the entire possible future of our species.
That last sentence would take more explaining than I have done in any work I’ve publicly written up, and it’s something I would like to get to in the future. I haven’t run into many people for whom this was the major sticking point for whether they accept the long-run perspective I defend. But if this is your sticking point and you think it would be for many economists, do let me know and I’ll consider prioritizing a better explanation.
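For anyone who wants the Weitzman-style point above in compact form, here is the standard illustration (with arbitrary numbers chosen purely for illustration). With uncertainty over the right rate, the certainty-equivalent discount factor and implied effective rate are

$$D(t) = \mathbb{E}\!\left[e^{-rt}\right] = \sum_i p_i\, e^{-r_i t}, \qquad R(t) = -\tfrac{1}{t}\ln D(t),$$

and $R(t)$ falls toward the lowest rate in the support as $t$ grows, because the low-rate scenario comes to dominate the expectation. For example, with $r = 1\%$ or $5\%$, each with probability $1/2$: $R(10) \approx 2.8\%$ but $R(200) \approx 1.3\%$, so far-future benefits are effectively discounted at close to the lowest rate considered plausible.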
Let me explain my position—first, I agree with rejecting a pure time preference, and instead doing discounting based primarily on expected growth in incomes.
For me, the expectation that in 50 years the average person could easily be twice as wealthy, leads to quite heavy discounting of investment to improve their welfare vs spending to alleviate suffering from extreme poverty right now.
It’s possible I haven’t thought this through thoroughly, and am explaining away my lack of enthusiasm for your choice of 5 causes to the neglect of the classic GiveWell/GWWC choices. Perhaps there is something to do with efficacy there—that I’m unsure of the likely impact of funding immigration advocacy, forecasting, and more research.
I see, thanks for your reply