> Combining some empirical evidence with a subjective guess does not necessarily make the conclusion more robust if the subjective guess is on shaky ground. An argument may only be as strong as its weakest link.
>
> I would not expect the subjective judgements involved in RP’s welfare range estimates to be more robust than the subjective judgements involved in estimating the probability of an astronomically large future (or of the probability of extinction in the next 100 years).
Thanks, Toby.
I definitely agree that the subjective guesses related to RP’s mainline welfare ranges are on shaky ground. However, I feel they are justifiably on shaky ground. For example, RP used 9 models to determine their mainline welfare ranges, giving the same weight to each. I have no idea whether this makes sense, but I find it hard to imagine what empirical evidence would inform the weights in a principled way.
In contrast, there is reasonable empirical evidence that the effects of interventions decay over time. I guess they decay quickly enough that the effects after 100 years account for less than 10 % of the overall effect, which makes me doubt astronomically large long-term impacts.
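To make the guess above concrete, here is a minimal sketch assuming, purely for illustration, that an intervention's effect decays exponentially at a constant annual rate (the functional form is my assumption, not an empirical claim):

```python
import math

def tail_fraction(r, T):
    """Fraction of the total (integrated) effect occurring after year T,
    assuming exponential decay at annual rate r: integral of e^(-r*t) from T
    to infinity, divided by the integral from 0 to infinity, equals e^(-r*T)."""
    return math.exp(-r * T)

# For the effects after 100 years to account for less than 10 % of the
# overall effect, the decay rate must exceed ln(10)/100, roughly 2.3 %/year.
r_min = math.log(10) / 100
print(round(r_min, 4))                    # about 0.023
print(round(tail_fraction(r_min, 100), 4))  # 0.1
```

So under this (assumed) model, even a fairly modest decay rate of a few percent per year is enough to concentrate over 90 % of the effect in the first century.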
I would also say there is reasonable evidence that the risk of human extinction is very low. A typical mammal species lasts around 1 million years, which implies an annual extinction risk of about 10^-6. Mammals have gone extinct due to gradual or abrupt climate change, or to other species, and I think these sources of risk are much less likely to drive humans extinct. So I conclude the annual risk of human extinction is lower than 10^-6. I guess the risk is 1 % as high, i.e. 10^-8 per year, which is 10^-7 (= 10^(-6 − 2 + 1)) over the next 10 years. I do not think AI can be interpreted as another species, because humans have lots of control over its evolution.
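The back-of-the-envelope arithmetic behind that 10^-7 figure can be spelled out as follows (the 1 % fraction is my subjective guess, as stated above):

```python
# Typical mammal species lifespan of ~1 million years implies an
# annual extinction risk of ~10^-6.
mammal_annual_risk = 1 / 1_000_000

# Guess: humans face about 1 % of the typical mammal's extinction risk.
human_fraction = 0.01

human_annual_risk = mammal_annual_risk * human_fraction  # ~10^-8 per year

# Over 10 years, approximating the cumulative risk as annual risk x years
# (valid because the annual risk is tiny).
risk_over_decade = human_annual_risk * 10  # ~10^-7, i.e. 10^(-6 - 2 + 1)
print(risk_over_decade)
```

The exponent bookkeeping in the comment matches the parenthetical in the text: start at 10^-6, subtract 2 orders of magnitude for the 1 % guess, add 1 for the 10-year horizon.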