"It's not clear why you'd think that the evidence for x-risk is strong enough to think we're one-in-a-million, but not stronger than that." This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you're one-in-a-million, it needn't push you all the way to thinking that you're one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we're (in expectation at least) at an enormously influential time - as I say in the blog post and the comments, I endorse those arguments! I think we should update massively away from our prior, in particular on the basis of the current rate of economic growth. (My emphasis)
Asserting an astronomically adverse prior, then a massive update, yet being confident you're in the right ballpark re. orders of magnitude does look pretty fishy though. For a few reasons:
First, (in the webpage version you quoted) you don't seem sure of a given prior probability, merely that it is "astronomical": yet astronomical numbers (including variations you note about whether to multiply by how many accessible galaxies there are or not, etc.) vary by substantially more than three orders of magnitude - you note two possible prior probabilities (of being among the million most influential people) of 1 in a million trillion (10^-18) and 1 in a hundred million (10^-8), a span of 10 orders of magnitude.
It seems hard to see how a Bayesian update from this (seemingly) extremely wide prior would give a central estimate at a (not astronomically minute) value, yet confidently rule against values "only" 3 orders of magnitude higher (a distance a ten millionth the width of this implicit span in prior probability). [It also suggests the highest VoI is to winnow this huge prior range, rather than spending effort evaluating considerations around the likelihood ratio.]
Second, whatever (very) small value we use for our prior probability, getting to non-astronomical posteriors implies likelihood ratios/Bayes factors which are huge. From (say) 10^-8 to 10^-4 is a factor of 10,000. As you say in your piece, this is much, much stronger than the benchmark for decisive evidence of ~100. It seems hard to say (e.g.) evidence from the rate of economic growth is "decisive" in this sense, and so it is hard to see how in concert with other heuristic considerations you get 10-100x more confirmation (indeed, your subsequent discussion seems to supply many defeaters to exactly this). Further, similar to worries about calibration out on the tail, it seems unlikely many of us can accurately assess LRs > 100 which are not direct observations, even to within orders of magnitude.
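As a minimal sketch of that arithmetic in odds form (using only the illustrative 10^-8 and 10^-4 figures above):

```python
# Odds-form Bayes: posterior odds = prior odds * Bayes factor.
# Figures are the illustrative 10^-8 prior and 10^-4 posterior from above.
def required_bayes_factor(prior, posterior):
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

bf = required_bayes_factor(1e-8, 1e-4)
print(f"Bayes factor needed: ~{bf:,.0f}")              # ~10,000
print(f"...or ~{bf / 100:,.0f}x the ~100 'decisive evidence' benchmark")
```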
Third, priors should be consilient, and can be essentially refuted by posteriors. A prior that gets surprised to the tune of 1-in-millions should get hugely penalized versus any alternative (including naive intuitive gestalts) which does not. It seems particularly costly as non-negligible credences in (e.g.) nuclear winter, the industrial revolution being crucial, etc. facially represent this prior being surprised by "1 in large X" events at a rate much greater than 1/X.
To end up with not-vastly lower posteriors than your interlocutors (presuming Buck's suggestion of 0.1% is fair, and not something like 1/million), it seems one asserts both a much lower prior and a much stronger update step, with the former mostly (but not completely) cancelled out by the latter. This prior seems to be ranging over many orders of magnitude, yet the posterior does not - and it is hard to see where the orders of magnitude of better resolution are arising from (if we knew for sure the prior is 10^-12 versus knowing for sure it is 10^-8, shouldn't the posterior shift a lot between the two cases?).
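To make the parenthetical concrete, a toy calculation with the update strength held fixed (the 10^5 Bayes factor is purely an illustrative assumption, not a figure anyone has endorsed):

```python
# With the strength of the update held fixed, the posterior tracks the prior
# essentially one-for-one at probabilities this small.
def posterior(prior, bayes_factor):
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

bf = 1e5  # illustrative fixed update strength (an assumption)
for prior in (1e-12, 1e-8):
    print(f"prior {prior:.0e} -> posterior ~{posterior(prior, bf):.0e}")
# prior 1e-12 -> posterior ~1e-07
# prior 1e-08 -> posterior ~1e-03
```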
It seems more reasonable to say "our" prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion.
Thanks, Greg. I really wasn't meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I'm sorry if I did.
"It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion."
I agree with this (though see the discussion with Lukas for some clarification about what we're talking about when we say "priors", i.e. are we building the fact that we're early into our priors or not).
But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1/1 million. I want to push on this because if your own credences are inconsistent with your argument, the reasons why seem both important to explore and to make clear to readers, who may be misled into taking this at "face value".
From this passage on page 13, I guess a generous estimate (/upper bound) is something like 1/1 million for the "among most important million people" claim:
[W]e can assess the quality of the arguments given in favour of the Time of Perils or Value Lock-in views, to see whether, despite the a priori implausibility and fishiness of HH, the evidence is strong enough to give us a high posterior in HH. It would take us too far afield to discuss in sufficient depth the arguments made in Superintelligence, or Pale Blue Dot, or The Precipice. But it seems hard to see how these arguments could be strong enough to move us from a very low prior all the way to significant credence in HH. As a comparison, a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, gives a Bayes factor of around 3 in favour of the hypothesis; a Bayes factor of 100 is regarded as "decisive" evidence. In order to move from a prior of 1 in 100 million to a posterior of 1 in 10, one would need a Bayes factor of 10 million - extraordinarily strong evidence.
I.e. a prior of ~1/100 million (which is less averse than others you moot earlier), and a Bayes factor < 100 (i.e. we should not think the balance of reason, all considered, is "decisive" evidence), so you end up at best at ~1/1 million. If this argument is right, you can be "super confident" that giving a credence of 0.1% is wrong (out by a ratio of >~1000 - in odds terms, the difference between ~1% and ~91%), and vice-versa.
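A quick check of both figures (the 1/100 million prior and <100 Bayes factor are the quoted ones; the 1000x odds gap is my gloss above):

```python
# Prior ~1/100 million with a Bayes factor capped at ~100 ("decisive")
# gives a posterior of at most ~1/1 million.
prior_odds = 1e-8
print(f"posterior: ~{prior_odds * 100:.0e}")         # ~1e-06

# And a 1000x gap in odds is roughly the gap between ~1% and ~91%:
low_odds = 0.01 / 0.99                               # odds at ~1% credence
high_odds = low_odds * 1000
print(f"{high_odds / (1 + high_odds):.0%}")          # ~91%
```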
Yet I don't think your credence on "this is the most important century" is 1/1 million. Among other things it seems to imply we can essentially dismiss things like short TAI timelines, Bostrom-Yudkowsky AI accounts, etc., as these are essentially upper-bounded by the 1/1M credence above.*
So (presuming I'm right and you don't place negligible credence on these things) I'm not sure how these things can be in reflective equilibrium.
1: "Among the most important million people" and "this is the most important century" are not the same thing, and so perhaps one has a (much) higher prior on the latter than the former. But if the action really was here, then the precisification of "hinge of history" as the former claim seems misguided: "Oh, this being the most important century could have significant credence, but this other sort-of related proposition nonetheless has an astronomically adverse prior" confuses rather than clarifies.
2: Another possibility is that there are sources of evidence which give us huge updates, even if the object level arguments in (e.g.) Superintelligence, The Precipice etc. are not among them. Per the linked conversation, maybe earliness gives a huge shift up from the astronomically adverse prior, so this plus the weak object level evidence gets you to lowish but not negligible credence.
Whether cashed out via prior or update, it seems important to make such considerations explicit, as the true case in favour of HH would include these considerations too. Yet the discussion of "how far you should update" on p11-13ish doesn't mention these massive adjustments, instead noting reasons to be generally sceptical (e.g. fishiness) and that the informal/heuristic arguments for object level risks should not be getting you Bayes factors of ~100 or more. This seems to be hiding the ball if in fact your posterior is ultimately 1000x or more your astronomically adverse prior, but not for reasons which are discussed (and so a reader may neglect to include them when forming their own judgement).
*: I think there's also a presumptuous philosopher-type objection lurking here too. Folks (e.g.) could have used a similar argument to essentially rule out any x-risk from nuclear winter before any scientific analysis, as this implies significant credence in HH, which the argument above essentially rules out. Similar to "using anthropics to hunt", something seems to be going wrong where the mental exercise of estimating potentially-vast future populations can also allow us to infer the overwhelmingly probable answers for disparate matters in climate modelling, AI development, the control problem, civilisation recovery and so on.
"But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1/1 million."
I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself.
It's the former, and for the reason of your explanation (2): us being early, being on a single planet, being at such a high rate of economic growth, should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including that we're very early on, and say: "My view is that, in the aggregate, these outside-view arguments should substantially update one from one's prior towards HoH, but not all the way to significant credence in HoH.[3]" And in footnote [3]: "Quantitatively: These considerations push me to put my posterior on HoH into something like the [1%, 0.1%] interval. But this credence interval feels very made-up and very unstable."
I'm going to think more about your claim that in the article I'm "hiding the ball". I say in the introduction that "there are some strong arguments for thinking that this century might be unusually influential", discuss the arguments that I think really should massively update us in section 5 of the article, and in that context I say "We have seen that there are some compelling arguments for thinking that the present time is unusually influential. In particular, we are growing very rapidly, and civilisation today is still small compared to its potential future size, so any given unit of resources is a comparatively large fraction of the whole. I believe these arguments give us reason to think that the most influential people may well live within the next few thousand years." Then in the conclusion I say: "There are some good arguments for thinking that our time is very unusual, if we are at the start of a very long-lived civilisation: the fact that we are so early on, that we live on a single planet, and that we are at a period of rapid economic and technological progress, are all ways in which the current time is very distinctive, and therefore are reasons why we may be highly influential too." That seemed clear to me, but I should judge clarity by how readers interpret what I've written.
For my part, I'm more partial to "blaming the reader", but (evidently) better people mete out better measure than I in turn.
Insofar as it goes, I think the challenge (at least for me) is that qualitative terms can cover multitudes (or orders of magnitude) of precision. I'd take ~0.3% to be "significant" credence for some values of significant. "Strong", "compelling", or "good" arguments could be an LR of 2 (after all, RCT confirmation can be ~3) or 200.
I also think quantitative articulation would help the reader (or at least this reader) better benchmark the considerations here. Taking the rough posterior of 0.1% and prior of 1 in 100 million, this implies a likelihood ratio of ~100,000 - loosely, ultra-decisive evidence. If we partition out the risk-based considerations (which the discussion seems to set as "less than decisive", so <100), the other considerations (perhaps mostly those in S5) give you an LR of >~1000 - loosely, very decisive evidence.
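Roughly, the decomposition looks like this (the 0.1% and 1-in-100-million figures are those above; capping the risk-based LR at 100 uses the "decisive evidence" benchmark):

```python
# Overall LR implied by a ~1e-8 prior and ~0.1% posterior, and the residual LR
# left for the non-risk (S5-style) considerations if the risk-based LR is <100.
prior, post = 1e-8, 1e-3
overall_lr = (post / (1 - post)) / (prior / (1 - prior))
print(f"overall LR: ~{overall_lr:,.0f}")             # ~100,000

risk_lr_cap = 100                                    # 'decisive evidence' benchmark
print(f"LR left for other considerations: ~{overall_lr / risk_lr_cap:,.0f}")  # ~1,000
```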
Yet the discussion of the considerations in S5 doesn't give the impression we should conclude they give us "massive updates". You note there are important caveats to these considerations, you say in summing up that these arguments are "far from watertight", and I also inferred the sort of criticisms given in S3 around our limited reasoning ability and scepticism of informal arguments would apply here too. Hence my presumption that these other considerations, although more persuasive than object level arguments around risks, would still end up below the LR ~100 for "decisive" evidence, rather than much higher.
Another way this would help would be illustrating the uncertainty. Given some indicative priors you note vary by ten orders of magnitude, the prior is not just astronomical but extremely uncertain. By my lights, the update doesn't greatly reduce our uncertainty (and could compound it, given challenges in calibrating around very high LRs). If the posterior odds could be "out by 100,000x either way", the central estimate being at ~0.3% could still give you (given some naive log-uniform distribution) 20%+ of the mass distributed at better than even odds of HH.
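A back-of-envelope check of that figure, under the naive log-uniform assumption stated above (spanning 100,000x either side of ~0.3%):

```python
import math

# Naive log-uniform over posterior odds centred on ~0.3%, spanning a factor of
# 100,000 either way; what fraction of the mass sits above even odds (odds > 1)?
central_odds = 0.003 / (1 - 0.003)
spread = 1e5
lo = math.log10(central_odds / spread)
hi = math.log10(central_odds * spread)
mass_above_even = (hi - math.log10(1)) / (hi - lo)
print(f"{mass_above_even:.0%}")                      # ~25%
```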
The moaning about hiding the ball arises from the sense that this numerical articulation reveals (I think) some powerful objections the more qualitative treatment obscures. E.g.:
Typical HH proponents are including considerations around earliness/single planet/etc. in their background knowledge/prior when discussing object level risks. Noting that the prior becomes astronomically adverse when we subtract these out of background knowledge, and so the object level case for (e.g.) AI risk can't possibly be enough to carry the day alone, seems a bait-and-switch: you agree the prior becomes massively less astronomical when we include single planet etc. in background knowledge, and in fact things like "we live on only one planet" are in our background knowledge (and were being assumed at least tacitly by HH proponents).
The attempt to "bound" object level arguments by their LR (e.g. "Well, these are informal, and it looks fishy, etc., so it is hard to see how you can get an LR >100 from these") doesn't seem persuasive when your view is that the set of germane considerations (all of which seem informal, have caveats attached, etc.) in concert are giving you an LR of ~100,000 or more. If this set of informal considerations can get you more than halfway from the astronomical prior to significant credence, why be so sure additional ones (e.g.) articulating a given danger can't carry you the rest of the way?
I do a lot of forecasting, and I struggle to get a sense of what priors of 1/100M or decisive evidence to the tune of LR 1000 would look like in "real life" scenarios. Numbers this huge (where you end up virtually "off the end of the tail" of your stipulated prior) raise worries about consilience (cf. "I guess the sub-prime mortgage crisis was a 10 sigma event"), but moreover pragmatic defeat: there seems a lot to distrust in an epistemic procedure along the lines of "With anthropics over stipulated, subtracted background knowledge we end up with an astronomically minute prior (where we could be off by many orders of magnitude), but when we update on adding back in elements of our actual background knowledge this shoots up by many orders of magnitude (but we are likely still off by many orders of magnitude)". Taking it at face value would mean only a minute update to our "pre-theoretic prior" on the topic held before embarking on this exercise (providing the two overlapped, and the pre-theoretic prior was not as radically uncertain, varying over a couple rather than many orders of magnitude). If we suspect (which I think we should) that this procedure of partitioning out background knowledge into update steps which approach log-log variance, and where we have minimal calibration, is less reliable than using our intuitive gestalt over our background knowledge as a whole, we should discount its deliverances still further.
Thanks Greg - I asked and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in a direction you'd like, though I'm sure they won't be sufficient to satisfy you.
Going to have to get back on with other work at this point, but I think your arguments are important, though the "bait and switch" doesn't seem totally fair - e.g. the update towards living in a simulation only works when you appreciate the improbability of living on a single planet.
How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong?
This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over what model to be using to answer the "are we at HoH" question, and each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in it). It seems like your outside view model assigns approximately zero probability to HoH, and so if now is the HoH, it's probably because we shouldn't be using your model, rather than because we're in the tiny proportion of worlds in your model where now is HoH.
I think this distinction is important because it seems to me that the probability of HoH given your beliefs should be almost entirely determined by the prior and HoH-likelihood of models other than the one you proposed - if your central model is the outside-view model you proposed, and you're 80% confident in that, then I suspect that the majority of your credence on HoH should come from the other 20% of your prior, and so the question of how much your outside-view model updates based on evidence doesn't seem likely to be very important.
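A toy version of the mixture I have in mind (the 80%/20% split is the illustrative figure above; the per-model HoH probabilities are made up purely for illustration):

```python
# P(HoH) as a mixture over models, weighted by credence in each model.
# The 80%/20% split is the figure above; per-model P(HoH) values are made-up.
models = {
    "outside-view model": {"credence": 0.80, "p_hoh": 1e-8},  # ~zero P(HoH)
    "everything else":    {"credence": 0.20, "p_hoh": 0.01},  # hypothetical
}

total = sum(m["credence"] * m["p_hoh"] for m in models.values())
print(f"P(HoH) = {total:.4f}")                        # ~0.002

for name, m in models.items():
    share = m["credence"] * m["p_hoh"] / total
    print(f"{name}: {share:.1%} of total P(HoH)")
# outside-view model: ~0.0%; everything else: ~100.0%
```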