You claim your argument is different to longtermism because it is based on empirical evidence (which I take it you’re saying should be enough to override our prior scepticism of claims involving enormous value?), but I don’t fully understand what you mean by that. To me, an estimate of the likelihood of humanity colonizing the galaxy (which is all strong longtermism is based on) seems at least as robust as an estimate of the welfare range of a nematode, if not more robust.
What is relevant for longtermist impact assessments is the increase in the probability of achieving astronomical welfare, which I guess is astronomically lower than the baseline probability of achieving it. For all the longtermist impact assessments I am aware of, such an increase is always a purely subjective guess. My estimate of the welfare range of nematodes of 6.47*10^-6 is not a purely subjective guess. I derived it from RP’s mainline welfare ranges, which result from some purely subjective guesses, but also from empirical evidence about the properties of the animals they assessed. The animal-years of soil animals affected per $ are also largely based on empirical evidence.
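To make the structure of that estimate concrete, here is a minimal sketch of how the pieces combine. Only the 6.47*10^-6 welfare range comes from the discussion above; the animal-years-per-$ figure below is a hypothetical placeholder, not a number from this thread.

```python
# Minimal sketch: welfare-range-weighted impact per dollar for soil nematodes.
# Only the welfare range is taken from the discussion above; the animal-years
# figure is a hypothetical placeholder used purely to illustrate the structure.
welfare_range_nematodes = 6.47e-6   # relative to humans, derived from RP's mainline ranges
animal_years_per_dollar = 1e6       # hypothetical placeholder, not a figure from this thread

# Human-welfare-equivalent animal-years affected per dollar.
equivalent_years_per_dollar = welfare_range_nematodes * animal_years_per_dollar
print(f"{equivalent_years_per_dollar:.2f} human-welfare-equivalent years per $")
```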
Combining some empirical evidence with a subjective guess does not necessarily make the conclusion more robust if the subjective guess is on shaky ground. An argument may only be as strong as its weakest link.
I would not expect the subjective judgements involved in RP’s welfare range estimates to be more robust than the subjective judgements involved in estimating the probability of an astronomically large future (or of the probability of extinction in the next 100 years).
Thanks, Toby.
I definitely agree that the subjective guesses related to RP’s mainline welfare ranges are on shaky ground. However, I feel like they are justifiably on shaky ground. For example, RP used 9 models to determine their mainline welfare ranges, giving the same weight to each of them. I have no idea whether this makes sense, but I find it hard to imagine what empirical evidence would inform the weights in a principled way.
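As a minimal sketch of why the weighting matters (the per-model numbers below are hypothetical placeholders, not RP’s actual outputs), the mainline estimate is just a weighted mean of the model outputs, so shifting weight between models can move it substantially:

```python
# Hypothetical per-model welfare-range estimates for one species (placeholders,
# not RP's actual figures), aggregated as a weighted mean.
model_estimates = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.5]

def weighted_mean(values, weights):
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

equal_weights = [1.0] * len(model_estimates)    # equal weighting, as RP used
skewed_weights = [2.0] * 5 + [0.5] * 4          # an arbitrary alternative weighting

print(weighted_mean(model_estimates, equal_weights))   # ~0.099
print(weighted_mean(model_estimates, skewed_weights))  # ~0.042, noticeably lower
```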
In contrast, there is reasonable empirical evidence that the effects of interventions decay over time. I guess they decay quickly enough for the effects after 100 years to account for less than 10 % of the overall effect, which makes me doubt astronomical long-term impacts.
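As a minimal sketch of what that decay claim implies (assuming, purely for illustration, that the annual effect decays exponentially at a constant rate; the rate below is a hypothetical placeholder, not an empirical estimate):

```python
import math

def fraction_of_effect_after(year: float, decay_rate: float) -> float:
    """Fraction of the total effect occurring after `year`, assuming the annual
    effect is proportional to exp(-decay_rate * t). The tail integral divided
    by the full integral simplifies to exp(-decay_rate * year)."""
    return math.exp(-decay_rate * year)

# Hypothetical decay rate of 2.5 %/year (an assumption, not an empirical estimate).
print(f"{fraction_of_effect_after(100, 0.025):.1%}")  # ~8.2 %, i.e. below 10 %

# Any constant decay rate above ln(10)/100 (about 2.3 %/year) puts the
# post-100-year share of the effect below 10 %.
```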
I would also say there is reasonable evidence that the risk of human extinction is very low. A random mammal species lasts around 1 M years, which implies an annual extinction risk of 10^-6. Mammals have gone extinct due to gradual or abrupt climate change, or other species, and I think these sources of risk are much less likely to drive humans extinct. So I conclude the annual risk of human extinction is lower than 10^-6. I guess the human risk is 1 % as high, which gives 10^-7 (= 10^(-6 − 2 + 1)) over the next 10 years. I do not think AI can be interpreted as just another species, because humans have lots of control over its evolution.
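A minimal sketch of that arithmetic (the 1 % adjustment and the 10-year horizon are my subjective guesses, as stated above):

```python
base_annual_risk = 1e-6        # implied by a typical mammal species lasting ~1 M years
human_adjustment = 0.01        # subjective guess: human risk is 1 % of the base rate
horizon_years = 10

annual_risk = base_annual_risk * human_adjustment            # 1e-8 per year
risk_over_horizon = 1 - (1 - annual_risk) ** horizon_years   # ~1e-7 over 10 years
print(f"{risk_over_horizon:.2e}")  # ~1.00e-07, matching 10^(-6 - 2 + 1)
```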