This doesn’t take away from your main point, but it would be some definite amount less wild if we don’t start exploring space for 100k years, right? Depending on how much less wild that would be, I could imagine it being enough to convince someone of a conservative view.
Some possible futures do feel relatively more “wild” to me, too, even if all of them are wild to a significant degree. If we suppose that wildness is actually pretty epistemically relevant (I’m not sure it is), then it could still matter a lot if some future is 10x wilder than another.
For example, take a prediction like this:
Humanity will build self-replicating robots and shoot them out into space at close to the speed of light; as they expand outward, they will construct giant spherical structures around all of the galaxy’s stars to extract tremendous volumes of energy; this energy will be used to power octillions of digital minds with unfathomable experiences; this process will start in the next thirty years, by which point we’ll already have transcended our bodies to reside on computers as brain emulation software.
A prediction like “none of the above happens; humanity hangs around and then dies out sometime in the next million years” definitely also feels wild in its own way. So does the prediction “all of the above happens, starting a few hundred years from now.” But both of these predictions still feel much less wild than the first one.
I suppose whether they actually are much less “wild” depends on one’s metric of wildness. I’m not sure how to think about that metric, though. If wildness is epistemically relevant, then presumably some forms of wildness are more epistemically relevant than others.
To say a bit more here, on the epistemic relevance of wildness:
I take it that one of the main purposes of this post is to push back against “fishiness arguments,” like the argument that Will makes in “Are We Living at the Hinge of History?”
The basic idea, of course, is that it’s a priori very unlikely that any given person would find themselves living at the hinge of history (and correctly recognise this). Due to the fallibility of human reasoning and due to various possible sources of bias, however, it’s not as unlikely that a given person would mistakenly conclude that they live at the HoH. Therefore, if someone comes to believe that they probably live at the HoH, we should think there’s a sizeable chance they’ve simply made a mistake.
As this line of argument is expressed in the post:
I know what you’re thinking: “The odds that we could live in such a significant time seem infinitesimal; the odds that Holden is having delusions of grandeur (on behalf of all of Earth, but still) seem far higher.”
The three critical probabilities here are:
1. Pr(Someone makes an epistemic mistake when thinking about their place in history)
2. Pr(Someone believes they live at the HoH | They haven’t made an epistemic mistake)
3. Pr(Someone believes they live at the HoH | They’ve made an epistemic mistake)
The first describes the robustness of our reasoning. The second describes the prior probability that we would live at the HoH (and be able to recognise this fact if reasoning well). The third describes the level of bias in our reasoning, toward the HoH hypothesis, when we make mistakes.
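To make the structure concrete, here is a minimal sketch of how these three probabilities combine under Bayes’ rule. Every number in it is a placeholder I’ve chosen purely for illustration, not an estimate from the post:

```python
# Illustrative Bayes calculation for the fishiness argument.
# Every number below is a made-up placeholder, not an estimate from the post.

p_mistake = 0.1                  # 1. Pr(epistemic mistake about one's place in history)
p_hoh_given_no_mistake = 1e-5    # 2. Pr(believes in HoH | no mistake)
p_hoh_given_mistake = 1e-3       # 3. Pr(believes in HoH | mistake)

# Total probability of ending up believing one lives at the HoH:
p_belief = (1 - p_mistake) * p_hoh_given_no_mistake + p_mistake * p_hoh_given_mistake

# Posterior probability that such a belief rests on a mistake (Bayes' rule):
p_mistake_given_belief = (p_mistake * p_hoh_given_mistake) / p_belief

print(f"Pr(mistake | believes HoH) = {p_mistake_given_belief:.2f}")
# ~0.92 with these placeholders: most people who conclude they live at the
# HoH would be mistaken, even though mistakes themselves are fairly rare.
```

The fishiness argument is just the observation that the third probability can dwarf the second, so the posterior lands mostly on “mistake.”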
I agree that all possible futures are “wild,” in some sense, but I don’t think this point necessarily bears much on the magnitudes of any of these probabilities.
For example, it would be sort of “wild” if long-distance space travel turns out to be impossible and our solar system turns out to be the only solar system to ever harbour life. It would also be “wild” if long-distance space travel starts to happen 100,000 years from now. But — at least at a glance — I don’t see how this wildness should inform our estimates for the three key probabilities.
One possible argument here, focusing on the bias factor, is something like: “We shouldn’t expect intellectuals to be significantly biased toward the conclusion that they live at the HoH, because the HoH Hypothesis isn’t substantially more appealing, salient, etc., than other beliefs they could have about the future.”
But I don’t think this argument would be right. For example: I think the hypothesis “the HoH will happen within my lifetime” and the hypothesis “the HoH will happen between 100,000 and 200,000 years from now” are pretty psychologically different.
To sum up: At least on a first pass, I don’t see why the point “all possible futures are wild” undermines the fishiness argument raised at the top of the post.
We were previously comparing two hypotheses:

1. HoH-argument is mistaken
2. Living at HoH

Now we’re comparing three:

1. “Wild times”-argument is mistaken
2. Living at a wild time, but HoH-argument is mistaken
3. Living at HoH
“Wild time” is almost as unlikely as HoH. Holden is trying to suggest it’s comparably intuitively wild, and it has pretty similar anthropic / “base rate” force.
So if your arguments look solid, “All futures are wild” makes hypothesis 2 look kind of lame/improbable—it has to posit a flaw in an argument, and also that you are living at a wildly improbable time. Meanwhile, hypothesis 1 merely has to posit a flaw in an argument, and hypothesis 3 merely has to posit HoH (which is only somewhat more to swallow than a wild time).
So now if you are looking for errors, you probably want to look for errors in the argument that we are living at a “wild time.” Realistically, I think you probably need to reject the possibility that the stars are real and that it is possible for humanity to spread to them. In particular, it’s not too helpful to e.g. be skeptical of some claim about AI timelines or about our ability to influence society’s trajectory.
This is kind of philosophically muddled because (I think) most participants in this discussion already accept a simulation-like argument that “Most observers like us are mistaken about whether it will be possible for them to colonize the stars.” If you set aside the simulation-style arguments, then I think the “all futures are wild” correction is more intuitively compelling.
(I think if you tell people “Yes, our good skeptical epistemology allows us to be pretty confident that the stars don’t exist” they will have a very different reaction than if you tell them “Our good skeptical epistemology tells us that we aren’t the most influential people ever.”)
Am I right in thinking, Paul, that your argument here is very similar to Buck’s in this post? https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential
Basically you’re saying that if we already know things are pretty wild (in Buck’s version: that we’re early humans), it’s a much less fishy step from there to very wild (“we’re at HoH”) than it would be if we didn’t know things were pretty wild already.
Thanks for the clarification! I still feel a bit fuzzy on this line of thought, but hopefully understand a bit better now.
At least on my read, the post seems to discuss a couple different forms of wildness: let’s call them “temporal wildness” (we currently live at an unusually notable time) and “structural wildness” (the world is intuitively wild; the human trajectory is intuitively wild).[1]
I think I still don’t see the relevance of “structural wildness,” for evaluating fishiness arguments. As a silly example: Quantum mechanics is pretty intuitively wild, but the fact that we live in a world where QM is true doesn’t seem to substantially undermine fishiness arguments.
I think I do see, though, how claims about temporal wildness might be relevant. I wonder if this kind of argument feels approximately right to you (or to Holden):
Step 1: A priori, it’s unlikely that we would live even within 10,000 years of the most consequential century in human history. However, despite this low prior, we have obviously strong reasons to think it’s at least plausible that we live this close to the HoH. Therefore, let’s say, a reasonable person should assign at least a 20% credence to the (wild) hypothesis: “The HoH will happen within the next 10,000 years.”
Step 2: If we suppose that the HoH will happen within the next 10,000 years, then a reasonable conditional credence that this century is the HoH should probably be something like 1/100. Therefore, it seems, our ‘new prior’ that this century is the HoH should be at least 0.2 × 0.01 = 0.002. This is substantially higher than (e.g.) the more non-informative prior that Will’s paper starts with.
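Spelling out the arithmetic in Step 2, using only the credences already assumed in the two steps above:

$$\Pr(\text{HoH this century}) \;\ge\; \Pr(\text{HoH within 10{,}000 yrs}) \times \Pr(\text{this century} \mid \text{HoH within 10{,}000 yrs}) \;=\; 0.2 \times \tfrac{1}{100} \;=\; 0.002$$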
Fishiness arguments can obviously still be applied to the hypothesis presented in Step 1, in the usual way. But maybe the difference, here, is that the standard arguments/evidence that lend credibility to the more conservative hypothesis “The HoH will happen within the next 10,000 years” are just pretty obviously robust — which makes it easier to overcome a low prior. Then, once we’ve established the plausibility of the more conservative hypothesis, we can sort of back-chain and use it to bump up our prior in the Strong HoH Hypothesis.
[1] I suppose the post also evokes an epistemic notion of wildness, when it describes certain confidence levels as “wild,” but I take it that “wild” here is mostly just a way of saying “irrational”?
Ben, that sounds right to me. I also agree with what Paul said. And my intent was to talk about what you call temporal wildness, not what you call structural wildness.
I agree with both you and Arden that there is a certain sense in which the “conservative” view seems significantly less “wild” than my view, and that a reasonable person could find the “conservative” view significantly more attractive for this reason. But I still want to highlight that it’s an extremely “wild” view in the scheme of things, and I think we shouldn’t impose an inordinate burden of proof on updating from that view to mine.
The three critical probabilities here are Pr(Someone makes an epistemic mistake when thinking about their place in history), Pr(Someone believes they live at the HoH|They haven’t made an epistemic mistake), and Pr(Someone believes they live at the HoH|They’ve made an epistemic mistake).
I think the more decision-relevant probabilities involve “Someone believes they should act as if they live at the HoH” rather than “Someone believes they live at the HoH”. Our actions may be much less important if ‘this is all a dream/simulation’ (for example). We should make our decisions in the way we would wish everyone-similar-to-us-across-the-multiverse to make their decisions.
As an analogy, suppose Alice finds herself getting elected as the president of the US. Let’s imagine there are 10^100 citizens in the US. So Alice reasons that it’s way more likely that she is delusional than that she is actually the president of the US. Should she act as if she is the president of the US anyway, or rather spend her time trying to regain her grip on reality? The 10^100 citizens want everyone in her situation to choose the former. It is critical to have a functioning president, and it does not matter if there are many delusional citizens who act as if they are the president. Their “mistake” does not matter. What matters is how the real president acts.
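A toy version of this policy comparison, where all the constants are assumptions I’ve made up for illustration rather than anything claimed above:

```python
# Toy policy comparison for the Alice analogy. All constants are
# illustrative assumptions, not claims made in the comment above.

N_DELUSIONAL = 10**6                    # citizens who falsely believe they are president
VALUE_OF_FUNCTIONING_PRESIDENT = 10**9  # value of the real president doing the job
COST_PER_DELUSIONAL_ACTOR = 1           # small cost when a delusional citizen "acts presidential"

def policy_value(act_as_president: bool) -> float:
    """Total value if everyone in Alice's epistemic situation follows the policy."""
    if act_as_president:
        # The one real president governs; the delusional actors each waste a little effort.
        return VALUE_OF_FUNCTIONING_PRESIDENT - N_DELUSIONAL * COST_PER_DELUSIONAL_ACTOR
    # Everyone, including the real president, steps back to check their grip on reality.
    return 0.0

print(policy_value(True) > policy_value(False))  # True: citizens prefer "act as president"
```

The point is that the policy is evaluated over everyone in the same epistemic situation: the real president’s contribution dominates, so “act as if” wins even though almost everyone following the policy is mistaken.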
This is fantastic.