I haven’t read it very carefully, but at a glance you might be interested in this post: Existential risk pessimism and the time of perils (note: see also the top comment, which I think makes really excellent points).
Here’s ChatGPT’s summary of the post (after I cut a bunch out of it):
Many effective altruists (EAs) believe that existential risk (the risk of human extinction) is currently high and that efforts to mitigate this risk have extremely high expected value.
However, the relationship between these two beliefs is not straightforward. Using a series of models, the post argues that across a range of assumptions, Existential Risk Pessimism (the belief that current existential risk is high) tends to undermine, rather than support, the Astronomical Value Thesis (the claim that risk mitigation efforts have astronomically high expected value). [A worked version of the simplest of these models is sketched just after this summary.]
The most plausible way to combine Existential Risk Pessimism with the Astronomical Value Thesis is through the Time of Perils Hypothesis, which posits that we are currently in a period of high risk, but this risk will decrease significantly in the future.
For the Time of Perils Hypothesis to support the Astronomical Value Thesis, it must involve a relatively short period of high risk and a very low level of risk thereafter.
Arguments for the Time of Perils Hypothesis that do not involve the development of artificial intelligence (AI) are not sufficient to justify the necessary form of the hypothesis.
It is suggested that the most likely way to ground a strong enough Time of Perils Hypothesis is through the development of superintelligent AI, which could radically and permanently lower the level of existential risk.
This has implications for the prioritization of existential risk mitigation as a cause area within effective altruism.
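To make the modeling claim above concrete, here is my own rough reconstruction of the simplest constant-risk calculation the post works with (the notation is mine: $r$ is the chance of extinction per century, $v$ the value of each century humanity survives, and $f$ the fraction by which an intervention cuts this century’s risk). The expected value of the future is

$$\mathbb{E}[V] \;=\; \sum_{t=1}^{\infty} (1-r)^t \, v \;=\; \frac{(1-r)\,v}{r},$$

and cutting this century’s risk from $r$ to $(1-f)\,r$ raises it to

$$\mathbb{E}[V \mid X] \;=\; \bigl(1 - (1-f)\,r\bigr)\,\frac{v}{r},$$

so the gain from the intervention is

$$\mathbb{E}[V \mid X] - \mathbb{E}[V] \;=\; \frac{v}{r}\,\bigl[\bigl(1-(1-f)r\bigr) - (1-r)\bigr] \;=\; f\,v,$$

at most one century’s worth of value, no matter how pessimistic $r$ is. The same geometric sum also shows why the post-peril risk level matters so much: if risk settles at a constant $r_\ell$ after the perils end, the remaining future is worth only about $v/r_\ell$ centuries of value, so $r_\ell$ has to be very low before the stakes become astronomical.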
Here’s its lightly edited summary of (a part of) the comment I linked:
[Replaced with a direct quote] “I think this model is kind of misleading, and that the original astronomical waste argument is still strong. It seems to me that a ton of the work in this model is being done by the assumption of constant risk, even in post-peril worlds. I think this is pretty strange.”
The model assumes a constant level of risk across all future periods, which may be unrealistic.
If civilization acts as a value maximizer and is able to effectively reduce risk, the level of existential risk may instead be roughly inversely proportional to the value at stake: the more future value there is to lose, the more effort goes into protecting it, so risk declines over time rather than staying constant.
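As a toy illustration of how much the constant-risk assumption matters, here is a quick numerical sketch. The 20% figure and the $c/t$ decay law are my own stand-ins, not anything from the post or the comment: with flat per-century risk the expected future is worth only a few centuries of value, while risk that decays as civilization matures leaves the expected value growing without bound as the horizon extends.

```python
# Toy comparison (illustrative numbers only): expected value of the future under
# (a) constant per-century extinction risk, and (b) risk that decays over time,
# a crude stand-in for "risk falls as the value at stake grows".

def expected_value(risk_at, centuries=1_000_000, value_per_century=1.0):
    """Sum over centuries of P(survive through century t) * value of that century."""
    total, survival = 0.0, 1.0
    for t in range(1, centuries + 1):
        survival *= 1.0 - risk_at(t)
        total += survival * value_per_century
    return total

constant_risk = lambda t: 0.2      # pessimistic flat 20% per century (assumed)
decaying_risk = lambda t: 0.2 / t  # risk shrinking over time (assumed functional form)

print(expected_value(constant_risk))  # ~4: converges to (1 - r) / r centuries of value
print(expected_value(decaying_risk))  # keeps growing as the horizon lengthens (~ t^0.8)
```

Under constant risk the sum converges no matter how far out you look, which is the post’s point; once risk declines fast enough, the “astronomical” part of the astronomical waste argument comes back, which is (as I read it) the commenter’s point.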