Hi — I’m Alex! I run the 80k headhunting service, which provides organisations with lists of promising candidates for their open roles.
You can give me (anonymous) feedback here: admonymous.co/alex_ht
Yeah, that seems right to me.
On doubling consumption though: if you can suggest a policy that increases growth consistently, eventually you might cause consumption to double (at some later time, consumption under the faster growth will be twice what it would have been under the slower growth). Do you mean you don’t think you could suggest a policy change that would increase the growth rate by much?
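As a toy illustration of that point (the growth rates here are my own assumptions, not numbers from the thread): a small permanent boost to the growth rate eventually doubles consumption relative to the counterfactual.

```python
import math

# Hypothetical growth rates (illustrative assumptions only):
g_baseline = 0.02   # 2% annual consumption growth
g_boosted = 0.025   # 2.5% after the hypothetical policy change

# Consumption under the faster path is double the counterfactual once
# exp((g_boosted - g_baseline) * t) == 2, i.e. t = ln(2) / delta_g.
delta_g = g_boosted - g_baseline
years_to_relative_doubling = math.log(2) / delta_g
print(round(years_to_relative_doubling, 1))  # roughly 139 years
```

So even a half-percentage-point growth boost "doubles consumption" eventually, just on a timescale of more than a century.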
Great to hear this has been useful!
I think if the parameter is around 1 then yes, spreading longtermism probably looks better than accelerating growth. Though I don’t know how expensive it is to double someone’s consumption in the long run.
Doubling someone’s consumption by just giving them extra money might cost $30,000 for 50 years ≈ $0.5 million. It seems right to me that there are ways to reduce the discount rate that are much cheaper than half a million dollars for 13 basis points. Eg. some community building probably takes a person’s discount rate from around 2% to around 0% for less than half a million dollars.
I don’t know how much cheaper it might be to double someone’s consumption by increasing growth, but I suspect that spreading longtermism still looks better for this value of the parameter.
How confident are you that the parameter is around 1? I haven’t looked into it and don’t know how much consensus there is.
I’ve written a summary here in case you haven’t seen it: https://forum.effectivealtruism.org/posts/CsL2Mspa6f5yY6XtP/existential-risk-and-growth-summary
What do you think absorbers might be in cases of complex cluelessness? I see that delaying someone on the street might just cause them to spend 30 seconds less procrastinating, but how might this work for distributing bednets, or increasing economic growth?
Maybe there’s a line of argument around nothing being counterfactual in the long term, because every time you solve a problem, someone else would eventually have solved it. Eg. if you didn’t increase growth in some region, someone else would have 50 years later, and now that you did, they won’t. But this just sounds like a weirdly stable system, and I guess it isn’t what you have in mind.
Thanks for writing this. I hadn’t thought about this explicitly and think it’s useful. The bite-sized format is great. A series of posts would be great too.
So you think the hazard rate might go from around 20% to around 1%? That’s still far from zero, and with enough centuries with 1% risk we’d expect to go extinct.
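To make that "enough centuries" point concrete (a minimal sketch assuming a constant per-century hazard, which is my simplification, not a claim from the thread):

```python
def survival_probability(hazard_per_century, centuries):
    """Chance of surviving `centuries` consecutive centuries of constant risk."""
    return (1 - hazard_per_century) ** centuries

# Even a "low" 1%-per-century hazard compounds over long timescales:
for n in (10, 100, 1000):
    print(n, round(survival_probability(0.01, n), 4))
```

At 1% per century, survival over 100 centuries is only about 37%, and over 1,000 centuries it is essentially zero, which is why a hazard rate that is merely "far from 20%" still implies eventual extinction in expectation.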
I don’t have any specific stories tbh, I haven’t thought about it (and maybe it’s just pretty implausible idk).
Not the author but I think I understand the model so can offer my thoughts:
1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.
The model is looking at general dynamics of risk from the production of new goods, and isn’t trying to look at AI in any kind of granular way. The timescales on which we see the inverted U-shape depend on what values you pick for different parameters, so there are different values for which the time axes would span decades instead of centuries. I guess that picking a different growth rate would be one clear way to squash everything into a shorter time. (Maybe this is pretty consistent with short/medium AI timelines, as they probably correlate strongly with really fast growth).
I think your point about AI messing up the results is a good one—the model says that a boom in growth has a net effect to reduce x-risk because, while risk is increased in the short-term, the long-term effects cancel that out. But if AI comes in the next 50-100 years, then the long-term benefits never materialise.
2. What do you think of Wei Dai’s argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?
Sure, maybe there’s a lock-in event coming in the next 20-200 years which we can either
Delay (by decreasing growth) so that we have more time to develop safety features, or
Make more safety-focussed (by increasing growth) so it is more likely to lock in a good state
I’d think that what matters is the resources (say coordination-adjusted-IQ-person-hours or whatever) spent on safety, rather than the time that could be spent on safety if we wanted. So if we’re poor and reckless, then more time isn’t necessarily good. And this time spent being less rich might also make other x-risks more likely. But that’s a very high-level abstraction and doesn’t engage with the specific claim too closely, so I’m keen to hear what you think.
3. What do you think of Eliezer Yudkowsky’s argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?
The model doesn’t say anything about this kind of granular consideration (and I don’t have strong thoughts of my own).
4. What do you think of Nick Bostrom’s urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).
In the model, risk depends on production of consumption goods, rather than the level of consumption technology. The intuition behind this is that technological ideas themselves aren’t dangerous, it’s all the stuff people do with the ideas that’s dangerous. Eg. synthetic biology understanding isn’t itself dangerous, but a bunch of synthetic biology labs producing loads of exotic organisms could be dangerous.
But I think it might make sense to instead model risk as partially depending on technology (as well as production). Eg. once we know how to make some level of AI, the damage might be done, and it doesn’t matter whether there are 100 of them or just one.
And the reason growth isn’t neutral in the model is that there are also safety technologies (which might be analogous to making the world more robust to black balls). Growth means people value life more so they spend more on safety.
5. Looking at Figure 7, my “story” for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I’m confused about why this happens; see the next point). Does this sound right? If not, I’m wondering if you could give a similar intuitive story.
Sounds right to me.
6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for δ_t might be the cause of this, but I’m not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).
The hazard rate does increase for the period in which there is more production of consumption goods, but this also means people become richer earlier than they otherwise would have, so they value safety more than they otherwise would.
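A toy way to see the time-compression intuition (the hazard curve and all numbers here are my own illustrative assumptions, not equation (7) from the paper): if faster growth compresses the same inverted-U hazard curve along the time axis without stretching it vertically, the area under the curve, and hence cumulative extinction risk, shrinks.

```python
import math

# Illustrative inverted-U hazard rate peaking mid-trajectory (an assumption,
# not the paper's actual functional form).
def hazard(t, scale=1.0):
    # Accelerated growth compresses the same curve along the time axis.
    u = t * scale
    return 0.002 * math.exp(-(((u - 500) / 200) ** 2))

def cumulative_extinction_risk(scale, horizon=2000, dt=1.0):
    """1 - survival probability, integrating the hazard over the horizon."""
    total = sum(hazard(t, scale) * dt for t in range(int(horizon)))
    return 1 - math.exp(-total)

baseline = cumulative_extinction_risk(scale=1.0)
accelerated = cumulative_extinction_risk(scale=2.0)
print(baseline > accelerated)  # compressing the curve shrinks the area under it
```

Compressing by a factor of 2 halves the integral of the hazard, so total risk falls even though the peak hazard is unchanged; the question of whether the height *should* also scale up is exactly the point being debated here.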
As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach “the end of growth” (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?
Hmm yeah, this makes it seem like the risk depends in part on the rate of change of consumption technologies, because if no new techs are being discovered, it seems like we might be safe from anthropogenic x-risk.
But, even if you believe that the hazard rate would decay in this situation, maybe what’s doing the work is that you’re imagining that we’re still doing a lot of safety research, and thinking about how to mitigate risks. So that the consumption sector is not growing, but the safety sector continues to grow. In the existing model, the hazard rate could decay to zero in this case.
I guess I’m also not sure if I share the intuition that the hazard rate would decay to zero. Sure, we don’t have the technology right now to produce AGI that would constitute an existential risk but what about eg. climate change, nuclear war, biorisk, narrow AI systems being used in really bad ways? It seems plausible to me that if we kept our current level of technology and production then we’d have a non-trivial chance each year of killing ourselves off.
What’s doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think that it’s not but that if growth stopped we’d keep working on safety (say developing clean energy, improving relationships between US and China etc.) so that we’d eventually be safe?
Now done here. It’s a ~10 page summary that someone with college-level math can understand (though I think you could read it, skip the math, and get the general idea).
Ah yeah that makes sense. I think they seemed distinct to me because one seems like ‘buy some QALYS now before the singularity’ and the other seems like ‘make the singularity happen sooner’ (obviously these are big caricatures). And the second one seems like it has a lot more value than the first if you can do it (of course I’m not saying you can). But yeah they are the same in that they are adding value before a set time. I can imagine that post being really useful to send to people I talk to—looking forward to reading it.
Pleased you liked it and thanks for the question. Here are my quick thoughts:
That kind of flourishing-education sounds a bit like Bostrom’s evaluation function described here: http://www.stafforini.com/blog/bostrom/
Or steering capacity described here: https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless
Unfortunately he doesn’t talk about how to construct the evaluation function, and steering capacity is only motivated by an analogy. I agree with you/Bostrom/Milan that there are probably some things that look more robustly good than others. It’s a bit unclear how to get these, but something like: ‘Build models of how the world works by looking to the past and then updating based on inside-view arguments about the present/future. Then take actions that look good on most of your models’ seems vaguely right to me. Some things that look good to me are: investing, building the EA community, reducing the chance of catastrophic risks, spreading good values, getting better at forecasting, and building models of how the world works.
Adjusting our values based on them being difficult to achieve seems a bit backward to me, but I’m motivated by subjective preferences, and maybe it would make more sense if you were taking a more ethical/realist approach (eg. because you expect the correct moral theory to actually be feasible to implement).
Following that paper, I think growth might increase x-risk in the near-term (say ~100-200 years), and might decrease x-risk in the long-term (if the growth doesn’t come at the cost of later growth). I meant (1), but was thinking about the effect of x-risk in the near-term.
Again, nice clarification.
I didn’t want to make any strong claims about which interventions people should end up prioritising, only about which effects they should consider to choose interventions.
Yep I meant (1) - thanks for checking. Also, that post sounds great—let me know if you want me to look over a draft :)
Yep I agree (I frame this as ‘beware updating on epistemic clones’ - people who have your beliefs for the same reason as you). My point in bringing this up was just that the common-sense view isn’t obviously near-termist.
Nice, thanks
I also found this to be a great framing of absorbers and hadn’t really got this before. It’s an argument against ‘all actions we take have huge effects on the future’, and I’m not sure how to weigh them up against each other empirically. Like how would I know if the world was more absorber-y or more sensitive to small changes?
I think conception events are just one example and there a bunch of other examples of this, the general idea being that the world has systems which are complex, hard to predict and very sensitive to initial conditions. Eg. the weather and climate system (a butterfly flapping its wings in China causing hurricane in Texas). But these are cases of simple cluelessness where we have evidential symmetry.
My claim is that we are faced with complex cluelessness, where there are some kind of systematic effects going on. To apply this to conception events—imagine we changed conception events so that girls were much more likely to be conceived than boys (say because in the near-term that had some good effects eg. say women tended to be happier at the time). My intuition here is that there could be long-term effects of indeterminate sign (eg. from increased/decreased population growth) which might dominate the near-term effects. Does that match your intuition?
Ah ok. Can you say a bit more about why long-term-focused interventions don’t meet your standards for rigour? I guess you take speculation about long-term effects as Bayesian evidence, but only as extremely weak evidence compared to evidence about near-term effects. Is that right?
Nice, that’s well put. Do you think we can get any idea of longterm effects eg. (somewhere between −10,000 and +10,000, but tending towards the higher/lower end)?
Yeah that sounds like simple cluelessness. I still don’t get this point (whereas I like other points you’ve made). Why would we think the distributions are identical or the probabilities are exactly 50% when we don’t have evidential symmetry?
I see why you would not be sure of the long-term effects (not have an EV estimate), but not why you would have an estimate of exactly zero. And if you’re not sure, I think it makes sense to try to get more sure. But I think you guys think this is harder than I do (another useful answer you’ve given).
It’s pretty blank—something like this