I think it’s an open question whether “even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future.” But I broadly agree with the other points. In a recent talk on astronomical waste stuff, I recommended thinking about AI in the category of “long-term technological/cultural path dependence/lock in,” rather than the GCR category (though that wasn’t the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.
Conversation with Holden Karnofsky, Nick Beckstead, and Eliezer Yudkowsky on the “long-run” perspective on effective altruism
Re 1, yes, it is philosophically controversial, but it also speaks to people with a number of different axiologies, as Brian Tomasik points out in another comment. One way to frame it: the argument does the work that separability does in my dissertation, while noticing that the astronomical waste argument can be run without making assumptions about the value of creating extra people. So you could think of it as running that argument with one less premise.
Re 2, yes, it pushes in the direction of an unbounded utility function, and that’s relevant if your preferred resolution of Pascal’s Mugging is to have a bounded utility function. But this is also a problem for standard presentations of the astronomical waste argument. As it happens, I think you can run arguments like astronomical waste with bounded utility functions. Matt Wage has some nice material on this in his senior thesis, and I think Carl Shulman has a forthcoming post which makes some similar points. The astronomical waste argument can be defended from more perspectives than it has been in the past, and it’s good to show that. This post is part of that project.
Re 3, I’d frame it this way: “We use this all the time and it’s great in ordinary situations. I’m doing the natural extrapolation to strange situations.” Yes, it might break down in weird situations, but it’s the extrapolation I’d put the most weight on.
A relatively atheoretical perspective on astronomical waste
I haven’t done a calculation on that, but I agree it’s important to consider. Regarding your calculation, a few of these factors are non-independent in a way that favors space colonization. Specifically:
- Speeding up and slowing down are basically the same problem, so you should treat them as one issue.
- Fitting everything you need into the spaceship and being able to build a civilization when you arrive are very closely related.
- Having your equipment survive the voyage and being able to build a civilization in a hostile environment are closely related. I would expect that if you can build a civilization when you get there, you can keep your equipment functioning during the voyage.
- Whether we have the capacity to overcome the above problems, and whether there is some presently unknown fatal obstacle, both bear on whether people later decide to attempt this.
I also think they’re positively related in a more subtle way. There are people who know more about this than you or I do who say that all these obstacles can be overcome. Conditional on one of these obstacles turning out to be surmountable (as they say), I gain confidence in their judgment, which makes me more confident that the other obstacles can be overcome as well.
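To illustrate why this correlation matters, here is a minimal Monte Carlo sketch (the obstacle count and probabilities are made-up assumptions, not estimates from anyone’s analysis). When the obstacles share a common factor, such as whether the expert consensus is broadly right, the chance of clearing all of them is much higher than a naive independent-obstacles calculation suggests:

```python
import random

random.seed(0)

N_TRIALS = 100_000
N_OBSTACLES = 4   # e.g. propulsion, payload, surviving the voyage, settlement
P_SUCCESS = 0.5   # assumed marginal chance of clearing each obstacle

def all_cleared_independent():
    # Treat each obstacle as an independent coin flip.
    return all(random.random() < P_SUCCESS for _ in range(N_OBSTACLES))

def all_cleared_correlated():
    # Shared factor: if the expert consensus is right, every obstacle is
    # easier; if it is wrong, every obstacle is harder. The marginal
    # probability per obstacle is still 0.5 * 0.9 + 0.5 * 0.1 = 0.5.
    consensus_right = random.random() < 0.5
    p = 0.9 if consensus_right else 0.1
    return all(random.random() < p for _ in range(N_OBSTACLES))

indep = sum(all_cleared_independent() for _ in range(N_TRIALS)) / N_TRIALS
corr = sum(all_cleared_correlated() for _ in range(N_TRIALS)) / N_TRIALS
print(f"independent obstacles: P(all cleared) ~ {indep:.3f}")  # ~0.5**4 = 0.06
print(f"correlated obstacles:  P(all cleared) ~ {corr:.3f}")   # ~0.5*0.9**4 = 0.33
```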
Re: space cities, I haven’t looked into it much personally. Much of the discussion seems to assume building your civilization on a planet. My intuition is that space cities are probably easier.
Will we eventually be able to colonize other stars? Notes from a preliminary review
Cool, thanks for the update.
Improving disaster shelters to increase the chances of recovery from a global catastrophe
This post appears to be incomplete.
I agree that the choice of discount rate is fundamentally important in this context. If you did the standard thing of choosing a constant discount rate (e.g., 5%) and used it for all downstream benefits, even ones millions of years into the future, that would make helping future generations seem substantially less important. By emphasizing the distinction between pure discounting and discounting as a computational convenience, I did not mean to suggest that views about how to discount future benefits are unimportant.
I was distinguishing between two possible motives for discounting, which I think clarifies what the purpose of discounting should be. The two purposes are hard to disentangle because they overlap in practice, but I think they diverge when it comes to distant future generations. It’s the difference between “benefits now are better just because that’s what people prefer” and “benefits now are better because they cause compounding growth, future people will be richer, the future is uncertain, etc.” I can try to explain more if the distinction I intend isn’t clear.

If you go for the second answer, the conclusion isn’t something like “use a 5% discount rate for all benefits, even ones a million years out,” but instead “use a discount rate that accurately reflects your beliefs about growth, uncertainty, marginal value of consumption, etc., in the distant future.” For the reasons given in the Hanson and Weitzman pieces I linked to, a constant rate is not what I expect. Briefly, constant exponential growth over million-year timescales is hard (though not impossible) to square with physics-imposed constraints on the resources we could have access to. And, as Weitzman argues, I believe uncertainty about future growth results in a form of discounting that looks more hyperbolic and less exponential in the long run. These differences are not very consequential over the next 50 years or so, but I believe they are very consequential when you consider the entire possible future of our species.
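To make that contrast concrete, here is a minimal sketch in the spirit of Weitzman’s argument (the candidate rates and their probabilities are illustrative assumptions of mine). When you are uncertain which constant rate is right, the certainty-equivalent discount factor is the average of the discount factors, and in the long run it is dominated by the lowest candidate rate, so the effective rate declines toward zero:

```python
HORIZONS = [10, 100, 1_000, 10_000, 1_000_000]  # years

def exp_factor(t, r=0.05):
    # Constant exponential discounting at rate r.
    return (1 + r) ** -t

# Uncertainty over which constant rate is correct (illustrative numbers):
# (rate, probability) pairs. Average the discount *factors*, not the rates.
RATES = [(0.00, 0.2), (0.01, 0.3), (0.05, 0.5)]

def certainty_equivalent_factor(t):
    return sum(p * (1 + r) ** -t for r, p in RATES)

for t in HORIZONS:
    print(f"t = {t:>9,} yr   constant 5%: {exp_factor(t):.3e}   "
          f"rate-uncertain: {certainty_equivalent_factor(t):.3e}")
# At t = 1,000,000 the constant-5% factor underflows to zero, while the
# rate-uncertain factor approaches 0.2 -- the weight on the zero rate.
```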
That last sentence would take more explaining than I have done in any work I’ve publicly written up, and it’s something I would like to get to in the future. I haven’t run into many people for whom this was the major sticking point for whether they accept the long-run perspective I defend. But if this is your sticking point and you think it would be for many economists, do let me know and I’ll consider prioritizing a better explanation.
I like to distinguish between pure discounting and discounting as a computational convenience. By “pure discounting,” I mean caring less about the very same benefit, which you’ll get with certainty in the future, than a benefit you can get now. I see this as a values question, and my preference is to have a 0% pure discount rate. One might discount as a computational convenience to adjust for returns on investment from having benefits arrive earlier, uncertainty about the benefits arriving, changes in future wealth, or other reasons.
When deciding how to discount, I find it easiest to first think about the problem without any discounting (doing something like a classical utilitarian analysis) and to model the empirical effects explicitly. Then, if you want to use discounting as a computational convenience, you can try to choose a rate that gives similar results to the undiscounted analysis.
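As a concrete illustration of that procedure (every input below is an assumption of mine, chosen only for the example): model the empirical effects explicitly, then back out the constant discount rate that would reproduce the same answer as a convenience.

```python
T = 50            # assumed years until the benefit arrives
P_ARRIVES = 0.9   # assumed probability the benefit actually materializes
GROWTH = 0.014    # assumed annual consumption growth
ETA = 1.0         # assumed elasticity of marginal utility (log utility)

# Undiscounted, explicit calculation: the value of one unit of consumption
# delivered at time T, relative to one unit delivered now.
richer_by = (1 + GROWTH) ** T        # future people are ~2x richer
marginal_value = richer_by ** -ETA   # diminishing marginal utility
explicit_value = P_ARRIVES * marginal_value

# The constant rate that reproduces this result as a computational convenience.
implied_r = explicit_value ** (-1 / T) - 1
print(f"explicit relative value: {explicit_value:.3f}")    # ~0.45
print(f"implied constant discount rate: {implied_r:.2%}")  # ~1.6%
```

The point of the sketch is that the implied rate is an output of explicit beliefs about growth and uncertainty, not an input; over much longer horizons, those beliefs no longer imply any constant rate at all.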
Regarding the hypothetical richer kids vs. current kids, I agree that one should make adjustments for uncertainty about whether there will be future kids, diminishing marginal utility of consumption, and beliefs about future growth. I don’t think this is well-captured by a constant exponential discount rate into the distant future. There are a lot of reasons I think this. Two I can quickly link to are here (http://www.overcomingbias.com/2009/09/limits-to-growth.html) and here (http://www.sciencedirect.com/science/article/pii/S009506969891052X).
I might be able to respond better if you told me how you think an appropriate treatment of discounting might affect the conclusions that Carl and I drew.
Yes, discount rates are an important thing to discuss here. I briefly discuss them on pp. 63-64 of my dissertation (http://www.nickbeckstead.com/research). I endorse using discount rates on a case-by-case basis as a convenience for calculation, but count harms and benefits as, in themselves and apart from their consequences, equally important whenever they occur.
For further articulation of similar perspectives I recommend:
Cowen, T. and Parfit, D. (1992). “Against the Social Discount Rate.” In Justice Between Age Groups and Generations, pages 144–161. Yale University Press, New Haven.
I broadly agree with Carl’s comment, though I have less of an opinion about the specifics of how you have done your learning grants. Part of your question may be, “Why would you do this if we’re already doing it?” I believe that strategic cause selection is an enormous issue and we have something to contribute. In this scenario, we certainly would want to work with you and like-minded organizations.
We think many non-human animals, artificial intelligence programs, and extraterrestrial species could all be of moral concern, to degrees varying with their particular characteristics but without species membership as such being essential. “Humanity” is used interchangeably in the text with “civilization,” a civilization for which humanity is currently in the driver’s seat.
After thinking about this later, I noticed that one of my claims was wrong. I said:
> Though I’m not particularly excited about refuges, they might be a good test case. I think that if you had this 5N view, refuges would be obviously dumb but if you had the view that I defended in my dissertation then refuges would be interesting from a conceptual perspective.
But then I ran some numbers and this no longer seemed true. If you assume a population of 10B, an N of 5, a refuge cost of $1B, a risk of doom of 1%, and that your refuge could eliminate a thousandth of that 1%, you get a cost per life-equivalent saved of $2,000 (with much more favorable figures if you assume higher risk and/or higher refuge effectiveness). So a back-of-the-envelope calculation suggests that, contrary to what I said, refuges would not be obviously dumb if you had the 5N view. (Link to back-of-envelope calc: https://docs.google.com/spreadsheets/d/1RRlj1sZpPJ8hr-KvMQy5R8NayA3a58EhLODXPu4NgRo/edit#gid=1176340950 .)
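For convenience, here is the same back-of-the-envelope arithmetic in a few lines of Python, using the inputs stated above:

```python
POPULATION = 10e9        # current population of 10B
N = 5                    # the "5N" view: extinction costs 5x the present population
REFUGE_COST = 1e9        # $1B to build the refuge
P_DOOM = 0.01            # assumed 1% risk of doom
RISK_REDUCTION = 0.001   # the refuge eliminates a thousandth of that risk

life_equivalents_saved = POPULATION * N * P_DOOM * RISK_REDUCTION
cost_per_life_equivalent = REFUGE_COST / life_equivalents_saved
print(f"life-equivalents saved: {life_equivalents_saved:,.0f}")             # 500,000
print(f"cost per life-equivalent saved: ${cost_per_life_equivalent:,.0f}")  # $2,000
```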
My best current guess is that building refuges wouldn’t be this effective at reducing existential risk, but I reached that view only after looking into the issue a bit. I was probably wrong to think that Holden’s 5N heuristic would have ruled out refuges ex ante. (Link to other discussion of refuges: /ea/5r/improving_disaster_shelters_to_increase_the/ .)