“A much more powerful assumption is needed (one that combines all of these weaker assumptions).”
Could you explain why the Time of Perils assumption is stronger? It seems to me to be consistent with rejecting the previous assumptions: for example, you could navigate the Time of Perils and still have no impact on future years, especially if the risk in those years was already very low. Rather than being stronger, it just seems like a different model to me.
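To make the contrast concrete, here is a minimal sketch of the kind of model at issue, in Python. All of the numbers, the horizon, and the function name are invented for illustration; this is a toy version, not the paper’s actual model. It shows why the constant-risk and Time of Perils pictures come apart: with constant per-century risk r, the expected value of the future is capped at roughly v(1−r)/r, while a profile where risk drops sharply after a few perilous centuries yields astronomically more.

```python
# Toy sketch, not the paper's actual model or numbers: each century
# contributes value v if humanity survives it, and century i is survived
# with probability (1 - r_i).

def expected_value(risks, v=1.0):
    """Expected total value of the future under a per-century risk profile."""
    total, p_alive = 0.0, 1.0
    for r in risks:
        p_alive *= 1.0 - r    # probability of surviving through this century
        total += p_alive * v  # this century's value, weighted by survival
    return total

horizon = 100_000  # centuries; a stand-in for "astronomical" timescales

# Constant 20% risk per century: expected value converges to v*(1-r)/r = 4.
constant_risk = [0.20] * horizon

# Time of Perils: 20% risk for 10 centuries, then a drop to one-in-a-million.
time_of_perils = [0.20] * 10 + [1e-6] * (horizon - 10)

print(expected_value(constant_risk))   # ~4.0: pessimism caps the future
print(expected_value(time_of_perils))  # ~10,000: astronomically larger
```

On these toy numbers the two profiles agree about the near term but differ by orders of magnitude about the long term, which is why the choice between them does so much work.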
I was also disappointed to not see AGI discussed in relation to the Time of Perils. The only mention is this footnote:
“One argument which I will not address is that the development of artificial intelligence may bring an end to the time of perils, for example by putting human civilization under the control of a single entity capable of managing existential risks (Bostrom 2014). The response to this argument turns on a number of conceptual and empirical questions surrounding artificial intelligence that are difficult to address in the space of a paper.”
This seems very unsatisfactory to me, because as far as I can see (perhaps I am mistaken) AGI is the main reason most people believe in the Time of Perils hypothesis. Declaring it to be out of scope doesn’t mean you have struck a blow against x-risk mitigation; it means you have erected a strawman.
On a much more mundane note, I found this paragraph very confusing:
“At this point, the most helpful response would be to ask Ord for more details. We are not told much about why we should expect humanity to grow in virtue or how this growth could lead to a quick and substantial drop in existential risk. Without these details, it is hard to place much stock in the appeal to civilizational virtue. But we may get some handle on the prospects for Ord’s argument by thinking through some particular civilizational virtues.”
Doesn’t Toby work in the same office as GPI? If the most helpful thing to do would be to ask him for details… why not do that?
“This seems very unsatisfactory to me, because as far as I can see (perhaps I am mistaken) AGI is the main reason most people believe in the Time of Perils hypothesis. Declaring it to be out of scope doesn’t mean you have struck a blow against x-risk mitigation; it means you have erected a strawman.”
This line of reasoning usually relies on the “imminent godlike AGI” hypothesis. Thorstad believes this to be wrong, but it takes a lot of highly involved argumentation to make the case (I’ve taken a stab at it here).
I think he’s right to leave it out. One paper should be about one thing, not try to debunk the entire x-risk argument all in one go; otherwise it would just become a sprawling, bloated mess. The important conclusion here is that the astronomical risk hypothesis relies on the time of perils hypothesis: previously, I think, a lot of people thought they were independent.
“One paper should be about one thing, not try to debunk the entire x-risk argument all in one go; otherwise it would just become a sprawling, bloated mess. The important conclusion here is that the astronomical risk hypothesis relies on the time of perils hypothesis: previously, I think, a lot of people thought they were independent.”
I think this (just arguing that, given pessimism, the Astronomical Risk Hypothesis relies on the Time of Perils Hypothesis (TOPH), and leaving it at that) would be a very reasonable thing to do. But that’s not what this paper actually does:
Pages 19 through 32 go through three arguments against TOPH. It’s just that what seems to me to be the strongest argument is relegated to a footnote. In your version of this paper, all these pages could be omitted.
The conclusion acts as if TOPH has been refuted, rather than merely shown to be an important premise. If TOPH is true, then pessimism supports the astronomical risk hypothesis (a toy calculation after the quotes below illustrates this).
“One implication of this paper is that existential risk pessimism tends to favor the original statement of the demandingness problem. If existential risk is indeed high, then it is relatively less important to mitigate existential risk, and it will be also relatively less important to save resources for the future, since future gains are less likely to be realized. For discussion see Kagan (1984), Mulgan (2001) and Sobel (2007). That means it may indeed be better for consequentialists to direct their resources towards present people. This should be welcome news for the consequentialist, since it raises the possibility of avoiding a strengthened form of the demandingness objection to consequentialism.”
“We have seen that existential risk pessimism strongly reduces the value of existential risk mitigation.”
“But in general, the models of this paper suggest that existential risk mitigation may not be as important as many pessimists have taken it to be, and crucially that pessimism is a hindrance rather than a support to the case for existential risk mitigation. The case for existential risk mitigation is strongest under more optimistic assumptions about existential risk.”
See also the abstract:
“It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true.”
Or the conclusion from this post:
Thorstad concludes that it seems unlikely that we live in the time of perils. This implies that reducing existential risk is probably not overwhelmingly valuable and that the case for reducing existential risk is strongest when the risk is low.
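For what it’s worth, here is the toy calculation mentioned above, under the same invented assumptions as before (the function names, risk numbers, and horizon are all made up, not taken from the paper). It illustrates the point: under a Time of Perils profile, halving this century’s risk is worth several times more to a pessimist, who starts from high current risk, than to an optimist.

```python
# Toy sketch with invented numbers (not the paper's model): compare the
# gain from halving this century's risk under a Time of Perils profile,
# starting from low ("optimist") vs. high ("pessimist") current risk.

def expected_value(risks, v=1.0):
    """Expected total value of the future under a per-century risk profile."""
    total, p_alive = 0.0, 1.0
    for r in risks:
        p_alive *= 1.0 - r
        total += p_alive * v
    return total

def mitigation_gain(r_now, f=0.5, n_perils=10, r_late=1e-6, horizon=100_000):
    """Gain from cutting this century's risk from r_now to r_now * (1 - f)."""
    base = [r_now] * n_perils + [r_late] * (horizon - n_perils)
    mitigated = [r_now * (1 - f)] + base[1:]
    return expected_value(mitigated) - expected_value(base)

print(mitigation_gain(r_now=0.01))  # optimist: gain ~ 430
print(mitigation_gain(r_now=0.20))  # pessimist: gain ~ 1,280 (about 3x larger)
```

On these numbers pessimism amplifies the value of mitigation precisely because TOPH is being assumed, which is why showing that the astronomical case depends on TOPH is not the same as refuting it.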
The three arguments he did address are the most popular in the academic literature, so it makes sense to give them priority. The “godlike aligned AI will fix everything forever” hypothesis might be popular within a few subcultures, but in my opinion it is severely unproven.
You have changed my mind, though: I think he should have addressed it with more than a footnote. If he added a section briefly explaining why he thought it was bunk, would you be satisfied?