Meta point. I would be curious to know why my comment was downvoted (2 karma in 4 votes, excluding my own vote). For what it's worth, I upvoted all your comments upstream of my comment in this thread because I think they are valuable contributions to the discussion.
I have high credence in basically zero x-risk after [the time of perils / achieving technological maturity and then stabilizing / 2050].
By "basically zero", do you mean 0 in practice (e.g. for EV calculations)? I can see the above applying for some definitions of the time of perils and technological maturity, but then I think they may be astronomically unlikely. I think people in EA circles are often sensitive to the possibility of astronomical upside (e.g. 10^70 lives), but not to the astronomically low chance of achieving that upside (e.g. a 10^-60 chance of achieving 0 longterm existential risk). I explain this by a natural human tendency not to attribute super low probabilities to events whose mechanics we do not understand well (e.g. surviving the time of perils), such that e.g. people would attribute similar probabilities to cosmic endowments of 10^50 and 10^70 lives. However, these may have super different probabilities for some distributions. For example, for a Pareto distribution (a power law), the probability density of a given value is proportional to value^-(alpha + 1). So, for a tail index of alpha = 1, a value of 10^70 is 10^-40 (= 10^(-2*(70 - 50))) as likely as a value of 10^50. So intuitions that the probability of 10^50 value is similar to that of 10^70 value would be completely off.
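For what it is worth, here is a minimal sketch of the arithmetic above (assuming a Pareto distribution with tail index alpha = 1 and scale 1, and working in log10 space; the function name is just illustrative):

```python
import math

def log10_pareto_density(log10_value, alpha=1.0, log10_scale=0.0):
    # Pareto pdf: alpha * scale^alpha * value^-(alpha + 1), expressed in log10
    # space so that astronomically large values (e.g. 10^70) cause no overflow.
    return math.log10(alpha) + alpha * log10_scale - (alpha + 1) * log10_value

# How much less likely is a cosmic endowment of 10^70 lives than one of 10^50
# lives, for a tail index of alpha = 1?
log10_ratio = log10_pareto_density(70) - log10_pareto_density(50)
print(log10_ratio)  # -40.0, i.e. a factor of 10^-40
```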
One can counter my particular example above by arguing that a power law is a priori implausible, and that we should use a more uninformative prior like a loguniform distribution. However, I feel the choice of prior would be somewhat arbitrary. For example, the upper bound of the prior loguniform distribution would be hard to define, and would be the major driver of the overall expected value. I think we should proceed with caution if prioritisation hinges on fairly arbitrary choices informed by almost no empirical evidence.
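As a quick illustration of why the upper bound would be the major driver, here is a sketch assuming the value of the future is loguniform between a lower and an upper bound (the bounds below, 10^10 and 10^30/10^50/10^70 lives, are purely illustrative):

```python
import math

def loguniform_mean(log10_lower, log10_upper):
    # Mean of V when log10(V) is uniform between log10_lower and log10_upper:
    # E[V] = (10^upper - 10^lower) / ((upper - lower) * ln(10)),
    # which for a wide interval is roughly 10^upper / ((upper - lower) * ln(10)).
    width = log10_upper - log10_lower
    return (10.0 ** log10_upper - 10.0 ** log10_lower) / (width * math.log(10))

# Holding an illustrative lower bound of 10^10 lives fixed, the expected value
# tracks the chosen upper bound almost one-for-one:
for log10_upper in (30, 50, 70):
    print(log10_upper, loguniform_mean(10, log10_upper))
```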
By the way, are you saying above that you expect 0 existential risk if we successfully pass 2050?
(I expect an effective population much, much larger than 10^10 humans, but I'm not sure "population size" will be a useful concept (e.g. maybe we'll decide to wait billions of years before converting resources to value), but that's not the crux here.)
To be honest, I do not think the crux is the expected value of the future either. If one has the (longtermist) view that most of the expected value of interventions is in the far future, then one should assess neartermist interventions by how much they e.g. change extinction risk. I assume you would not claim that donating to the Long-Term Future Fund (LTFF), as I have been doing, decreases extinction risk 10^70 times as cost-effectively as donating to GiveWell's top charities? Personally, I do not even know whether GiveWell's top charities increase or decrease extinction risk, but I think the ratio between the absolute value of the cost-effectiveness of LTFF and such charities is much smaller than 10^70. I would maybe say 90 % chance of it being smaller than 10^10, although this is hard to quantify.
I can see the above applying for some definitions of time of perils and technological maturity, but then I think they may be astronomically unlikely.
What do you think about these considerations for expecting the time of perils to be very short in the grand scheme of things? It just doesn't seem like the probability of possible future scenarios decays nearly fast enough to offset their greater value in expectation.
Those considerations make sense to me, but without further analysis it is not obvious to me whether they imply e.g. an annual existential risk in 2300 of 0.1 % or 10^-10, or e.g. a longterm existential risk of 10^-20 or 10^-60. I still tend to agree the expected value of the future is astronomical (e.g. at least 10^15 lives), but then the question is how easily one can increase it.
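To illustrate how much those two annual-risk figures diverge once compounded, here is a rough sketch assuming a constant annual risk over a purely illustrative 1-million-year horizon:

```python
import math

def log10_survival_probability(annual_risk, years):
    # Probability of surviving `years` consecutive years at a constant annual
    # existential risk, i.e. (1 - annual_risk)^years, in log10 space to avoid
    # underflow for very small probabilities.
    return years * math.log1p(-annual_risk) / math.log(10)

# Compounding the two annual risks above over an illustrative 1 million years:
for annual_risk in (1e-3, 1e-10):
    print(annual_risk, log10_survival_probability(annual_risk, 1e6))
# ~ -434.5 for 0.1 %/year (survival ~ 10^-435) vs ~ -4.3e-5 for 10^-10/year
# (survival ~ 1), so the two assumptions imply wildly different expected values.
```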
I still tend to agree the expected value of the future is astronomical (e.g. at least 10^15 lives), but then the question is how easily one can increase it.
If one grants that the time of perils will last at most a few centuries, after which the per-century x-risk will be low enough to vindicate the hypothesis that the bulk of expected value lies in the long term (even if one is uncertain about exactly how low it will drop), then deprioritizing longtermist interventions on tractability grounds seems hard to justify, because the concentration of total x-risk in the near term means it's comparatively much easier to reduce.
I am not sure proximity in time is the best proxy for tractability. The ratio between global GDP at the end of the relevant period (e.g. the time of perils) and current global GDP seems better, as it accounts for both the time horizon and the rate of change/growth over it. Intuitively, for a fixed time horizon, the higher the rate of change/growth, the harder it is to predict the outcomes of our actions, i.e. tractability will tend to be lower. The higher tractability linked to a short time of perils may be roughly offset by the faster rate of change over it. Maybe Aschenbrenner's paper on existential risk and growth can inform this?
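Here is a minimal sketch of the proxy I have in mind (the growth rates and horizons are purely illustrative, and continuous compounding is assumed):

```python
import math

def gdp_ratio(annual_growth_rate, years):
    # Ratio of global GDP at the end of the time of perils to current GDP,
    # assuming continuous exponential growth at a constant rate.
    return math.exp(annual_growth_rate * years)

# A long, slow time of perils and a short, fast one can imply the same amount
# of change to see through, and hence (on this proxy) similar tractability:
print(gdp_ratio(0.02, 300))  # ~403: 300 years at 2 %/year growth
print(gdp_ratio(0.20, 30))   # ~403: 30 years at 20 %/year growth
```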
Note I am quite sympathetic to influencing the longterm future. As I said, I have been donating to the LTFF. However, I would disagree that donating to the LTFF is astronomically (e.g. 10 OOMs) better than to the Animal Welfare Fund.