Yeah, it sounds like this might not be appropriate for someone with your credences, though I’m confused by what you say here:
I mentioned point/mean probability estimates, but my upper bounds (e.g. 90th percentile) are quite close, as they are strongly limited by the means. For example, if one’s mean probability is 10^-10, the 90th percentile probability cannot be higher than 10^-9; otherwise, the top 10 % of the distribution alone would push the mean above 10^-10 (= (1 − 0.90)*10^-9). So my point remains as long as you think my point/mean estimates are reasonable.
I’m not sure what you mean by this. What are you taking the mean of, and which type of mean, and why? It sounds like maybe you’re talking about the arithmetic mean? If so that isn’t how I would think about unknown probabilities fwiw. IMO it seems more appropriate to use a geometric mean to express this kind of uncertainty, or explicitly model the distribution of possible probabilities. I don’t think either approach should limit your high-end credences.
Makes sense. I liked that post. I think my comment was probably overly critical, and not related specifically to your series. I was not clear, but I meant to point to the greater value of using standard cost-effectiveness analyses (relative to using a model like yours) given my current empirical beliefs (astronomically low non-TAI extinction risk).
Yeah, fair enough :)
If one thinks the probability of extinction or permanent collapse without TAI is astronomically low (as I do)
Have you written somewhere about why you think permanent collapse is so unlikely? The more I think about it, the higher my credence seems to get :\
I have the impression there is often little data to validate them, and therefore think significant weight should be given to a prior simply informed by how long a given transition took.
I’m not saying the sexual selection theory is strongly likely to be correct. But it seems to be taken seriously by evolutionary psychologists, and if you’re finding that other theories of human intelligence give ultra-high credence of a new species evolving, it seems like that credence should be substantially lowered by even a modest belief in the plausibility of such theories.
What are you taking the mean of, and which type of mean, and why? It sounds like maybe you’re talking about the arithmetic mean?
Yes, I was referring to the arithmetic mean of a probability distribution. To illustrate, if I thought the probability of a given event was uniformly distributed between 0 and 1, the mean (best guess) probability would be 50 % (= (0 + 1)/2).
IMO it seems more appropriate to use a geometric mean to express this kind of uncertainty, or explicitly model the distribution of possible probabilities.
I agree the median, geometric mean, or geometric mean of odds are usually better than the mean for aggregating forecasts[1]. However, if we aggregated multiple probability distributions from various models/forecasters, we would end up with a final probability distribution, and I am saying our final point estimate corresponds to the mean of this distribution. Jaime Sevilla illustrated this here.
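To make the distinction concrete, here is a minimal Python sketch. The four forecaster probabilities and the lognormal shapes are made-up assumptions for illustration, not numbers from this thread: the first part aggregates point forecasts with the median, geometric mean, and geometric mean of odds, while the second pools full distributions into a mixture and takes its arithmetic mean, which is what I mean by the final point estimate.

```python
import numpy as np

# Hypothetical point forecasts of an event's probability from 4 forecasters.
p = np.array([1e-6, 1e-4, 1e-3, 1e-2])

median = np.median(p)
geo_mean = np.exp(np.mean(np.log(p)))
odds = p/(1 - p)
agg_odds = np.exp(np.mean(np.log(odds)))  # geometric mean of odds
geo_mean_odds = agg_odds/(1 + agg_odds)   # converted back to a probability

print(f"median:                 {median:.2e}")
print(f"geometric mean:         {geo_mean:.2e}")
print(f"geometric mean of odds: {geo_mean_odds:.2e}")

# If each forecaster instead provides a full distribution over the
# probability, pooling samples from all of them gives the final (mixture)
# distribution, whose arithmetic mean is the final point estimate.
rng = np.random.default_rng(0)
samples = np.concatenate(
    [rng.lognormal(mean=np.log(p_i), sigma=1, size=10**5) for p_i in p]
)
samples = np.minimum(samples, 1)  # probabilities cannot exceed 1
print(f"mean of the mixture:    {samples.mean():.2e}")
```

The mean of the mixture is pulled up by the most pessimistic forecaster, which is why it behaves differently from the median or geometric mean of the point forecasts.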
I don’t think either approach should limit your high-end credences.
Maybe it helps to think about this in the context of a distribution which is not over a probability. If we have a distribution over possible profits, and our expected profit is 100 $, it cannot be the case that the 90th percentile profit is 1 M$, because then the top 10 % of outcomes alone would contribute at least 100 k$ (= (1 − 0.90)*10^6) to the expected profit, which is much larger than 100 $.
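This is essentially a Markov-style bound: for a non-negative quantity, the top 10 % of outcomes alone contribute at least 0.1 times the 90th percentile to the mean, so the 90th percentile can be at most 10 times the mean. A minimal Python check, with an arbitrary lognormal profit distribution chosen purely for illustration:

```python
import numpy as np

# For a non-negative random variable X, E[X] >= 0.1*q90, since at least
# 10 % of the probability mass sits at or above the 90th percentile q90.
# Equivalently, q90 <= 10*E[X].
rng = np.random.default_rng(0)
profits = rng.lognormal(mean=np.log(100), sigma=2, size=10**6)

mean = profits.mean()
q90 = np.quantile(profits, 0.90)

print(f"mean profit:     {mean:,.0f} $")
print(f"90th percentile: {q90:,.0f} $")
assert q90 <= 10*mean  # holds for any non-negative distribution
```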
You may want to check Joe Carlsmith’s thoughts on this topic in the context of AI risk.
Have you written somewhere about why you think permanent collapse is so unlikely? The more I think about it, the higher my credence seems to get :\
No, at least not in any depth. I think permanent collapse would require very large population and infrastructure losses, but I see these as very unlikely, at least in the absence of TAI. I estimated a probability of 3.29*10^-6 of the climatic effects of nuclear war before 2050 killing 50 % of the global population (based on the distribution I defined here for the famine death rate). Pandemics would not directly cause infrastructure loss. Indirectly, there could be infrastructure loss due to people stopping maintenance activities out of fear of being infected, but I guess this requires a level of lethality which makes such a pandemic very unlikely.
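For the kind of tail calculation involved, here is a purely illustrative Python sketch. The lognormal parameters below are placeholders I made up, not the distribution I actually fitted, and the number ignores the probability of a nuclear war occurring in the first place:

```python
from scipy.stats import lognorm

# Placeholder lognormal for the famine death rate caused by the climatic
# effects of a nuclear war (NOT the fitted distribution from the linked
# analysis): median death rate of 1 %, sigma of 1.5.
death_rate = lognorm(s=1.5, scale=0.01)

# Probability of the death rate reaching at least 50 % of the global
# population; multiply by the probability of nuclear war before 2050 to
# get an unconditional estimate.
print(f"P(death rate >= 50 %) = {death_rate.sf(0.5):.2e}")
```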
Besides more specific considerations like the above, I have consistently ended up arriving at tail risk estimates much lower than canonical ones from the effective altruism community. So, instead of regarding these as a prior as I used to do, I now immediately start from a lower prior, as I should not expect my risk estimates to go down/up[2]. For context on how I arrived at lower tail risk estimates, you can check the posts I linked at the start of my 1st comment. Here are 2 concrete examples I discussed elsewhere:
Luisa Rodriguez’s analyses, which are arguably somewhat canonical in the effective altruism community, imply 630 M expected deaths before 2050 from nuclear wars between the United States and Russia, whereas I estimated just 2 % of that.
Denkenberger 2022 implies the value of the future would decrease by 12.0 % given a 10 % agricultural shortfall, which corresponds to an injection of soot into the stratosphere of around 5 Tg[3], whereas I think the longterm impact of this would be negligible. Even given human extinction, I guess the value of the future would only decrease by 0.0513 % (see rough calculation in my 1st comment).
Relatedly, my extinction risk estimates are much lower than Toby Ord’s existential risk estimates given in The Precipice.
I aggregated probabilities using the median to estimate my prior extinction risk for wars and terrorist attacks, and using the geometric mean to obtain my nuclear war extinction risk.
[2] If I expected my best guess to go up/down, I should just update my best guess now to the value I expect it will converge to.
[3] Xia 2022 predicts a shortfall of 7.0 % for 5 Tg without adaptation (see last row of Table S2).