Thanks, Greg. I asked, and it turned out I had one remaining day to make edits to the paper, so I’ve made some minor ones in a direction you’d like, though I’m sure they won’t be sufficient to satisfy you.
I’m going to have to get back to other work at this point, but I think your arguments are important, though the ‘bait and switch’ charge doesn’t seem totally fair—e.g. the update towards living in a simulation only works when you appreciate the improbability of living on a single planet.
Thanks for this, Greg.
“But what is your posterior? Like Buck, I’m unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1 / 1 million.”
I’m surprised this wasn’t clear to you, which has made me think I’ve done a bad job of expressing myself.
It’s the former, and for the reason of your explanation (2): us being early, being on a single planet, being at such a high rate of economic growth, should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including that we’re very early on, and say: “My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.”
I’m going to think more about your claim that in the article I’m ‘hiding the ball’. I say in the introduction that “there are some strong arguments for thinking that this century might be unusually influential”, discuss the arguments that I think really should massively update us in section 5 of the article, and in that context I say “We have seen that there are some compelling arguments for thinking that the present time is unusually influential. In particular, we are growing very rapidly, and civilisation today is still small compared to its potential future size, so any given unit of resources is a comparatively large fraction of the whole. I believe these arguments give us reason to think that the most influential people may well live within the next few thousand years.” Then in the conclusion I say: “There are some good arguments for thinking that our time is very unusual, if we are at the start of a very long-lived civilisation: the fact that we are so early on, that we live on a single planet, and that we are at a period of rapid economic and technological progress, are all ways in which the current time is very distinctive, and therefore are reasons why we may be highly influential too.” That seemed clear to me, but I should judge clarity by how readers interpret what I’ve written.
Actually, rereading my post I realize I had already made an edit similar to the one you suggest (though without linking to the article, which hadn’t yet been finished) back in March 2020:
“[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows: The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.
The unconditional prior probability over whether this is the most influential century would then depend on one’s priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that’s the claim we can focus our discussion on.
It’s worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas in ‘Self-location and objective chance’ (ms)): “A rational agent’s priors locate him uniformly at random within each possible world.” I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (SIA), which gives greater prior probability mass to worlds with large populations. But we don’t need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population—the question of whether we’re at the most influential time does not require us to get into debates over anthropics.]”
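To make the structure of that edit concrete, here is a minimal sketch (the distribution over civilization lifespans is entirely made up for illustration; only the 1/n rule comes from the text above):

```python
# Minimal sketch of the prior structure described in the edit above:
# P(most influential century | civilization lasts n centuries) = 1/n,
# combined with an illustrative (made-up) prior over lifespans.
lifespan_prior = {      # P(civilization lasts n centuries) -- assumed numbers
    10**2: 0.50,
    10**4: 0.30,
    10**6: 0.15,
    10**8: 0.05,
}

# Unconditional prior that this is the most influential century.
p_most_influential = sum(p / n for n, p in lifespan_prior.items())

# The conjunctive claim discussed above: most influential century AND an
# enormous future (here, arbitrarily, at least a million centuries).
p_conjunction = sum(p / n for n, p in lifespan_prior.items() if n >= 10**6)

print(f"P(most influential century)             = {p_most_influential:.2e}")
print(f"P(most influential AND enormous future) = {p_conjunction:.2e}")
```

The numbers illustrate the point at stake: under the 1/n rule, the unconditional prior is dominated by short-future worlds, while the conjunctive claim that the Value Lock-In or Time of Perils views require starts with a far smaller prior.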
Thanks, Greg. I really wasn’t meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I’m sorry if I did.
“It seems more reasonable to say ‘our’ prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion.”
I agree with this (though see the discussion with Lukas for some clarification of what we’re talking about when we say ‘priors’, i.e. whether we’re building the fact that we’re early into our priors or not).
Richard’s response is about right. My prior with respect to influentialness is such that either: x-risk is almost surely zero; or we are almost surely not going to have a long future; or x-risk is higher now than it will be in the future, but harder to prevent than it will be in the future; or in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value; or the idea that most of the value is in the future is false.
I do think we should update away from those priors, and I think that update is sufficient to make the case for longtermism. I agree that the location in time that we find ourselves in (what I call ‘outside-view arguments’ in my original post) is sufficient for a very large update.
Practically speaking, thinking through the surprisingness of being at such an influential time made me think:
- Maybe I was asymmetrically assessing evidence about how high x-risk is this century. I think that’s right; e.g. I now don’t think that x-risk from nuclear war is as high as 0.1% this century, and I think that longtermist EAs have sometimes overstated the case in favour.
- If we think that there’s high existential risk from, say, war, we should (by default) think that such high risk will continue into the future.
- It’s more likely that we’re in a simulation.
- It also made me take more seriously the thoughts that in the future there might be non-extinction-risk mechanisms for producing comparably enormous amounts of (expected) value, and that maybe there’s some crucial consideration(s) that we’re currently missing, such that our actions today are low-expected-value compared to actions in the future.
“Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it’s not super unlikely that early people are the most influential.”
I strongly agree with this. The fact that under a mix of distributions, it becomes not super unlikely that early people are the most influential, is really important and was somewhat buried in the original comments-discussion.
And then we’re also very distinctive in other ways: being on one planet, being at such a high-growth period, etc.
Thanks, I agree that this is key. My thoughts:
- I agree that our earliness gives a dramatic update in favor of us being influential. I don’t have a stable view on the magnitude of that.
- I’m not convinced that the negative exponential form of Toby’s distribution is the right one, but I don’t have any better suggestions.
- Like Lukas, I think that Toby’s distribution gives too much weight to early people, so the update I would make is less dramatic than Toby’s.
- Seeing as Toby’s prior is quite sensitive to the choice of reference class, I would want to choose the reference class of all observer-moments, where an observer is a conscious being. This means we’re not as early as we would say if we used the distribution of Homo sapiens, or of hominids. I haven’t thought about what exactly that means, though my intuition is that it means the update isn’t nearly as big.
- So I guess the answer to your question is ‘no’: our earliness is an enormous update, but not as big as Toby would suggest.
“If we’re doing things right, it shouldn’t matter whether we’re building earliness into our prior or updating on the basis of earliness.”
Thanks, Lukas, I thought this was very clear and exactly right.
“So now we’ve switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn’t seem much easier than making a guess about P(X in H | X in E), and it’s not obvious whether our intuitions here would lead us to expect more or less influentialness.”
That’s interesting, thank you—this statement of the debate has helped clarify things for me. It does seem to me that doing the update (going via P(X in E | X in H) rather than directly trying to assess P(X in H | X in E)) is helpful, but I’d understand the position of someone who wanted just to assess P(X in H | X in E) directly.

I think it’s helpful to assess P(X in E | X in H) because it’s not totally obvious how one should update on the basis of earliness. The arrow of causality and the possibility of lock-in over time definitely give reasons in favor of influential people being earlier. But there’s still the big question of how great an update that should be. And the cumulative nature of knowledge and understanding gives reasons in favor of thinking that later people are more likely to be more influential.
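To spell out the equivalence behind ‘dividing by 10’ (my own gloss, using the 10^10 and 10^11 figures from the quoted passage; N is the total number of people who will ever live, and it cancels):

$$P(X \in H \mid X \in E) = \frac{P(X \in E \mid X \in H)\,P(X \in H)}{P(X \in E)} = P(X \in E \mid X \in H)\cdot\frac{10^{10}/N}{10^{11}/N} = \frac{P(X \in E \mid X \in H)}{10}$$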
This seems important to me because, for someone claiming that we should think that we’re at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does. To me at least, that’s a striking fact and wouldn’t have been obvious before I started thinking about these things.
This comment of mine in particular seems to have been downvoted. If anyone were willing, I’d be interested to understand why: is that because (i) the tone is off (seemed too combative?); (ii) the arguments themselves are weak; (iii) it wasn’t clear what I’m saying; (iv) it wasn’t engaging with Buck’s argument; (v) other?
Yeah, I do think the priors-based argument given in the post was poorly stated, and therefore led to unnecessary confusion. Your suggestion is very reasonable, and I’ve now edited the post.
(Comment 5/5)
Smaller comments
I agree that one way you can avoid thinking we’re astronomically influential is by believing the future is short, such as by believing you’re in a simulation, and I discuss that in the blog post at some length. But, given that there are quite a number of ways in which we could fail to be at the most influential time (perhaps right now we can do comparatively little to influence the long-term, perhaps we’re too lacking in knowledge to pick the right interventions wisely, perhaps our values are misguided, perhaps longtermism is false, etc), it seems strange to put almost all of the weight on one of those ways, rather than give some weight to many different explanations.
“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.”

This seems pretty strange as an argument to me. Being one-in-a-billion is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-billion. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormously influential time—as I say in the blog post and the comments, I endorse those arguments! I think we should update massively away from our prior, in particular on the basis of the current rate of economic growth.

But for direct philanthropy to beat patient philanthropy, being at a hugely influential time isn’t enough. Even if this year is hugely influential, next year might be even more influential again; even if this century is hugely influential, next century might be more influential again. And if that’s true, then—as far as the consideration of wanting to spend our philanthropy at the most influential times goes—we have a reason for saving rather than donating right now.
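In odds form (my own gloss on the arithmetic of that point):

$$\frac{P(H \mid D)}{P(\neg H \mid D)} = \frac{P(D \mid H)}{P(D \mid \neg H)}\cdot\frac{P(H)}{P(\neg H)}$$

With a uniform prior over people (or centuries), the one-in-a-billion hypothesis starts with prior odds a thousand times smaller than the one-in-a-million hypothesis, so reaching the same posterior credence in it requires a Bayes factor roughly a thousand times larger.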
You link to the idea that the Toba catastrophe was a bottleneck for human populations. Though I agree that we used to be more at-risk from natural catastrophes than we are today, more recent science has cast doubt on that particular hypothesis. From The Precipice: “the “Toba catastrophe hypothesis” was popularized by Ambrose (1998). Williams (2012) argues that imprecision in our current archeological, genetic and paleoclimatological techniques makes it difficult to establish or falsify the hypothesis. See Yost et al. (2018) for a critical review of the evidence. One key uncertainty is that genetic bottlenecks could be caused by founder effects related to population dispersal, as opposed to dramatic population declines.”
Ambrose, S. H. (1998). “Late Pleistocene Human Population Bottlenecks, Volcanic Winter, and Differentiation of Modern Humans.” Journal of Human Evolution, 34(6), 623–51.
Williams, M. (2012). “Did the 73 ka Toba Super-Eruption have an Enduring Effect? Insights from Genetics, Prehistoric Archaeology, Pollen Analysis, Stable Isotope Geochemistry, Geomorphology, Ice Cores, and Climate Models.” Quaternary International, 269, 87–93.
Yost, C. L., Jackson, L. J., Stone, J. R., and Cohen, A. S. (2018). “Subdecadal Phytolith and Charcoal Records from Lake Malawi, East Africa, Imply Minimal Effects on Human Evolution from the ∼74 ka Toba Supereruption.” Journal of Human Evolution, 116, 75–94.
(Comment 4/5)
The argument against patient philanthropy
“I sometimes hear the outside view argument used as an argument for patient philanthropy, which it in fact is not.”
I don’t think this works quite in the way you think it does.
It is true that, in a similar vein to the arguments I give against being at the most influential time (where ‘influential’ is a technical term, excluding investing opportunities), you can give an outside-view argument against now being the time at which you can do the most good tout court. As a matter of fact, I believe that’s true: we’re almost certainly not at the point in time, in all history, at which one can do the most good by investing a given unit of resources to donate at a later date. That time could plausibly be earlier than now, because you get greater investment returns, or plausibly later than now, because in the future we might have a better understanding of how to structure the right legal instruments, specify the constitution of one’s foundation, etc.
But this is not an argument against patient philanthropy compared to direct action. In order to think that patient philanthropy is the right approach, you do not need to make the claim that now is the time, out of all times, when patient philanthropy will do the most expected good. You just need the claim that, currently, patient philanthropy will do more good than direct philanthropy. This is a (much, much) weaker claim to make.
And, crucially, there’s an asymmetry between patient philanthropy and direct philanthropy.
First, suppose there are 70 time periods at which you could spend your philanthropic resources (every remaining year of your life, say), and that the scale of your philanthropy is small (so that diminishing returns can be ignored). Then, if the expected cost-effectiveness of the best opportunities varies substantially over time, there will be just one point in time at which your philanthropy will have the most impact, and you should try to max out your philanthropy at that time period, donating all your philanthropy at that time if you can. (Perhaps that isn’t quite possible because you are limited in how much debt you can take out against future income; but still, the number of times you will donate in your life will be small.) So, in 69 out of 70 time periods (or, even if you need to donate a few times, ~67 out of 70 time periods), you should be saving rather than donating. That’s why direct philanthropy needs to make the claim that now is the most, or at least one of the most, potentially impactful times, out of the relevant time periods when one could donate, whereas patient philanthropy doesn’t.
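Here is a minimal sketch of that asymmetry (all numbers invented for illustration): with small-scale philanthropy, no diminishing returns, and cost-effectiveness that varies across periods, the optimal plan concentrates all giving at a single period and saves in every other one.

```python
# Sketch: with varying cost-effectiveness and compounding returns, the
# optimal small-scale donor gives everything at one period and saves
# in all the others. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
periods = 70
growth = 1.05                                 # assumed annual investment return
effectiveness = rng.lognormal(0, 1, periods)  # assumed good-per-dollar by period

# Value of saving until period t and then donating the whole pot:
value_if_donate_at = effectiveness * growth ** np.arange(periods)
best = int(np.argmax(value_if_donate_at))

print(f"Donate everything in period {best}; save in the other {periods - 1}.")
```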
Second, the inductive argument against now being the optimal time for patient philanthropy is much weaker than the inductive argument against now being the most influential time (in the technical sense of ‘influential’). It’s not clear there is an inductive argument against now being the optimal time for patient philanthropy: there’s at least a plausible argument that, on average, the value of patient philanthropy decreases every year, because one loses one extra year of investment returns. Combined with the fact that one cannot affect the past (well, putting non-causal decision theories to the side ;) ), this gives an argument for thinking that now will be higher-impact for patient philanthropy than all future times.
Personally, I don’t think that argument quite works, because you can still mess up patient philanthropy, so maybe future people will do patient philanthropy better than we do. But it’s an argument that’s much more compelling in the case of patient philanthropy than it is for the influentialness of a time.
(Comment 3/5)
Earliness
“Will’s resolution is to say that in fact, we shouldn’t expect early times in human history to be hingey, because that would violate his strong prior that any time in human history is equally likely to be hingey.”
I don’t see why you think I think this. (I also don’t know what “violating” a prior would mean.)
The situation is: I have a prior over how influential I’m likely to be. Then I wake up, find myself in the early 21st century, and make a whole bunch of updates. These include updates on the facts that: I’m on one planet, I’m at a period of unusually high economic growth and technological progress, and I *seem* to be unusually early on and can’t be very confident that the future is short. So, as I say in the original post and the comments, I update (dramatically) my estimate of my influentialness on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no. And I haven’t yet seen a quantitative argument for thinking that the answer is ‘yes’, whereas the inductive argument seems to give a positive argument for thinking ‘no’.
One reason for thinking that the update on the basis of earliness is not enough is related to the inductive argument: it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of those times a bag of oats, or sell that bag today at market prices. Where would you do more good? The case in favour of the earlier time is if you think that speeding up economic growth / technological progress is so good that the greater impact you’d have at earlier times outweighs the seemingly better opportunities we have today. But I don’t think you believe that, and at least the standard EA view is that the benefits of speed-up are small compared to x-risk reduction or other proportional impacts on the value of the long-run future.
(Comment 2/5)
The outside-view argument (in response to your first argument)
In the blog post, I stated the priors-based argument quite poorly—I thought this bit wouldn’t be where the disagreement was, so I didn’t spend much time on it. How wrong I was about that! For the article version (link), I tidied it up.
The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that, if there are n people in a population, the probability that you are among the m most-F people in that population is m/n.
This falls out of the self-sampling assumption, that a rational agent’s priors locate her uniformly at random within each possible world. If you reject this way of setting priors then, by modus tollens, you reject the self-sampling assumption. That’s pretty interesting if so!
On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future. Only that, a priori, the probability that we’re *both* in a very large future *and* one of the most influential people ever is very low. For that reason, there aren’t any implications from that argument to claims about the magnitude of extinction risk this century. We could be comparatively un-influential in many ways: if extinction risk is high this century but continues to be high for very many centuries; if extinction risk is low this century and will be higher in coming centuries; if extinction risk is at any level and we can’t do anything about it, or we are not yet knowledgeable enough to choose actions wisely; or if longtermism is false; and so on.
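As an illustration of the magnitudes involved (the numbers are assumed for concreteness, not taken from the article): if a very large future contains n = 10^14 people, the prior that you are among the m = 10^10 most influential of them is

$$\frac{m}{n} = \frac{10^{10}}{10^{14}} = 10^{-4},$$

and the prior on the conjunctive claim is this multiplied by whatever prior probability you assign to such a large future existing at all.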
Separately, I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early. Building earliness into your prior means you’ve got to give up on the very-plausible-seeming self-sampling assumption; means you’ve got to treat the predicate ‘is most influential’ differently than other predicates; has technical challenges; and the case in favour seems to rely on a posteriori observations about how the world works, like those you give in your post.
(Comment 1/5)
Thanks so much for engaging with this, Buck! :)
I revised the argument of the blog post into a forthcoming article, available at my website (link). I’d encourage people to read that version rather than the blog post, if you’re only going to read one. The broad thrust is the same, but the presentation is better.
I’ll discuss the improved form of the discussion about priors in another comment. Some other changes in the article version:
- I frame the argument in terms of the most influential people, rather than the most influential times. It’s the more natural reference class, and is more action-relevant.
- I use the term ‘influential’ rather than ‘hingey’. It would be great if we could agree on terminology here; as Carl noted on my last post, ‘hingey’ could make the discussion seem unnecessarily silly.
- I define ‘influentialness’ (aka ‘hingeyness’) in terms of ‘how much expected good you can do’, not just ‘how much expected good you can do from a longtermist perspective’. Again, that’s the more natural formulation, and, importantly, one way in which we could fail to be at the most influential time (in terms of expected good done by direct philanthropy) is if longtermism is false and, say, we only discover the arguments that demonstrate that in a few decades’ time.
- The paper includes a number of graphs, which I think helps make the case clearer.
- I don’t discuss the simulation argument. (Though that’s mainly for space and academic-normalcy reasons—I think it’s important, and discuss it in the blog post.)
Something I forgot to mention in my comments before: Peter Watson suggested to me that it’s reasonably likely that estimates of climate sensitivity will be revised upwards for the next IPCC report, as the latest generation of models are running hotter. (E.g. https://www.carbonbrief.org/guest-post-why-results-from-the-next-generation-of-climate-models-matter, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782 - “The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values exceeding the CMIP5 maximum (Figure 1a). Specifically, the range has increased from 2.1–4.7 K in CMIP5 to 1.8–5.6 K in CMIP6.”) This could drive up the probability mass over 6 degrees in your model by quite a bit, so it could be worth doing a sensitivity analysis on that.
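If it helps, here is a minimal sketch of what such a sensitivity analysis could look like. It is entirely my own illustration: it assumes, purely for concreteness, that each quoted model range can be read as a rough 5–95% interval of a lognormal over ECS, and it uses P(ECS > 6 K) as a crude stand-in for the probability mass over 6 degrees of warming.

```python
# Sketch: fit a lognormal to each generation's quoted ECS range (treated,
# as an assumption, as a 5-95% interval) and compare tail mass above 6 K.
import numpy as np
from scipy import stats

def lognormal_from_interval(lo, hi, p_lo=0.05, p_hi=0.95):
    """Lognormal whose p_lo and p_hi quantiles match the given interval."""
    z_lo, z_hi = stats.norm.ppf([p_lo, p_hi])
    sigma = (np.log(hi) - np.log(lo)) / (z_hi - z_lo)
    mu = np.log(lo) - sigma * z_lo
    return stats.lognorm(s=sigma, scale=np.exp(mu))

for label, (lo, hi) in {"CMIP5": (2.1, 4.7), "CMIP6": (1.8, 5.6)}.items():
    dist = lognormal_from_interval(lo, hi)
    print(f"{label}: P(ECS > 6 K) ~ {dist.sf(6.0):.4f}")
```

On these made-up assumptions, the tail mass above 6 K grows by roughly a factor of eight when moving from the CMIP5 range to the CMIP6 range, which is the kind of shift it could be worth checking your model against.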
How much do you worry that MIRI’s default non-disclosure policy is going to hinder MIRI’s ability to do good research, because it won’t be able to get as much external criticism?
Suppose you find out that Buck-in-2040 thinks that the work you’re currently doing is a big mistake (which should have been clear to you, now). What are your best guesses about what his reasons are?
What’s the biggest misconception people have about current technical AI alignment work? What’s the biggest misconception people have about MIRI?
I agree that Gordon deserves great praise and recognition!
One clarification: My discussion of Zhdanov was based on Gordon’s work: he volunteered for GWWC in the early days, and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to cite him, which was a major oversight on my part, and I feel really bad about that. (I’ve apologized to him about this.) So that discussion shouldn’t be seen as independent convergence.