Great post! You convinced me that the Astronomical Value Thesis is less likely than I thought.
I’d like to point out, though, that of the risks against which you labeled space exploration “less helpful”, by far the largest is AI risk. But it seems to me that any AI capable of wiping out humans on other planets would also, if aligned, be capable of strongly reducing existential risk, and would therefore make the Time of Perils Hypothesis true.
Thanks! That’s an important point.
Bracketing AI --> Time of Perils: For the purpose of this discussion, I want to bracket arguments for the Time of Perils Hypothesis which rely on AI bringing an end to the Time of Perils. I think that your comment might be a very productive way to support conclusion 4: it’s very important to make sure we’re right about this AI stuff, because those views about AI might end up doing a lot of heavy lifting in supporting the Astronomical Value Thesis.
One reason why I don’t want to talk too much about AI here is that I suspect the discussion wouldn’t have much to do with the models in this post, so it would probably be a different post. Another reason I’m hesitant to broach these issues is that I think many of the debates about AI would probably take a good deal longer to settle. For now, I’m happy if you read me as leaving open questions about AI and the Time of Perils.
I do have some work in progress on the singularity hypothesis if you are interested. Shoot me an email!
A side point: One thing that might be worth emphasizing is that not everyone leans quite as strongly on AI in their thinking about existential risk. For example, Toby Ord puts a 10% chance on existential catastrophe from rogue AI, but about 8.6% on the remaining anthropogenic risks combined (engineered pandemics, unforeseen anthropogenic risks, and other anthropogenic risks). If that is right, then we might want to make sure we are giving enough weight to scenarios that don’t involve AI.
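For what it’s worth, here is roughly where that 8.6% comes from, assuming I’m remembering the estimates in The Precipice correctly (about 1 in 30 for engineered pandemics, 1 in 30 for unforeseen anthropogenic risks, and 1 in 50 for other anthropogenic risks):

1/30 + 1/30 + 1/50 ≈ 3.3% + 3.3% + 2% ≈ 8.6%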