“I am a sorcerer from another dimension, who has the power to conjure arbitrary numbers of sentient beings at will. How likely do you think this claim is?”
I think you’re trying to redefine the problem I gave you and I don’t think you’re allowed to do that.
In this problem the mugger is not threatening a specific number of beings; instead, they are claiming to have a specific power. They are claiming that:
Given any positive whole number N, they can instantly and magically create N beings (and give them some high or low welfare).
You need to assign a probability to that claim. If it is anything other than literally zero, you will be vulnerable to mugging.
I would guess the probability is, for example, p(N) = 10^-N, which would imply the expected benefits approach 0 as N increases (because the limit of N*10^-N as N tends to infinity is 0). I do not have a strong view about the specific function p(N) representing the probability, but I think reality is such that p(N)*N tends to 0 as N increases.
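As a quick, purely illustrative check of that limit (the specific prior p(N) = 10^-N is just the example above, not a considered estimate):

```python
# Expected benefit if a threat involving N beings is assigned probability p(N) = 10^-N.
def expected_benefit(n: int) -> float:
    p_n = 10.0 ** -n      # illustrative prior from the comment above
    return n * p_n        # expected benefit is proportional to N * p(N)

for n in (1, 5, 10, 50, 100):
    print(n, expected_benefit(n))
# N * 10^-N shrinks towards 0 as N grows, so larger threatened numbers
# do not translate into larger expected benefits under this prior.
```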
Might be missing something silly, but I think you’re still dodging the question. There is no specific N in the claim I gave you. This magician is claiming that they have a spell that, given any N, will create N beings.
So are you just saying you assign that claim zero probability?
I would reply by saying the likelihood of that is arbitrarily close to 0, although not exactly 0, and noting that the number of sentient beings to be created, multiplied by the likelihood of the mugger actually having the ability to create them, tends to 0 as the number of sentient beings tends to infinity.
I believe this is mathematically impossible! But probably not worth going back and forth on this.
I actually basically agree with your response to the Pascal mugger problem here. I’m still very uncertain, but I think I would endorse:
Making decisions by maximizing expected value, even when dealing with tiny objective probabilities.
Assigning lower prior probability to claims in proportion to the size of the impact they say I can have, to avoid decision paralysis when considering situations involving potentially enormous value.
Assigning a probability of literally zero to any claim that says I can influence arbitrarily high amounts of value, or infinite amounts of value, at least for the purposes of making decisions (but drawing a distinction between a claim having zero subjective ‘probability’ and a claim being impossible).
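A toy sketch of how those three points could fit together (everything below is my own illustration; the inverse-proportional penalty and the numbers are assumptions, not anything from the discussion):

```python
import math

def decision_value(claimed_value: float, base_prior: float) -> float:
    """Toy decision weight for a claim of a given impact (illustrative only)."""
    if math.isinf(claimed_value):
        # Third point: claims of arbitrarily high or infinite value get
        # probability zero for decision purposes.
        return 0.0
    # Second point: the prior shrinks in proportion to the claimed impact.
    penalised_prior = base_prior / claimed_value
    # First point: decisions still go by expected value, which now stays bounded.
    return penalised_prior * claimed_value

print(decision_value(1e100, base_prior=0.01))         # ~0.01, not swamped by the huge claim
print(decision_value(float("inf"), base_prior=0.01))  # 0.0
```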
But I think this approach makes me sceptical of the argument you are making here as well. You claim your argument is different to longtermism because it is based on empirical evidence (which I take it you’re saying should be enough to override our prior scepticism of claims involving enormous value?), but I don’t fully understand what you mean by that. To me, an estimate of the likelihood of humanity colonizing the galaxy (which is all strong longtermism is based on) seems as robust as, if not more robust than, an estimate of the welfare range of a nematode.
For instance, I don’t even know how you would define units of welfare in a way that lets you make comparisons between a human and a nematode, let alone how you would go about measuring it empirically. I suspect it is impossible to do in a non-arbitrary way.
What is relevant for longtermist impact assessments is the increase in the probability of achieving astronomical welfare, which I guess is astronomically lower than that probability itself. In all the longtermist impact assessments I am aware of, that increase is a purely subjective guess. My estimate of 6.47*10^-6 for the welfare range of nematodes is not a purely subjective guess. I derived it from RP’s mainline welfare ranges, which result from some purely subjective guesses, but also from empirical evidence about the properties of the animals they assessed. The animal-years of soil animals affected per $ are also largely based on empirical evidence.
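To make the structure of that kind of estimate concrete (a minimal sketch: the 6.47*10^-6 figure is the one quoted above, while the animal-years per $ and the welfare change are made-up placeholders, not numbers from RP or from this discussion):

```python
# Toy structure of a welfare-range-based cost-effectiveness estimate.
welfare_range_nematode = 6.47e-6   # from the comment above, derived from RP's mainline welfare ranges
animal_years_per_dollar = 1e6      # PLACEHOLDER: animal-years of soil animals affected per $
welfare_change_per_year = 0.1      # PLACEHOLDER: change in welfare, as a fraction of the welfare range

human_equivalent_welfare_per_dollar = (
    animal_years_per_dollar * welfare_range_nematode * welfare_change_per_year
)
print(human_equivalent_welfare_per_dollar)  # ~0.65 with these placeholder inputs
```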
Combining some empirical evidence with a subjective guess does not necessarily make the conclusion more robust if the subjective guess is on shaky ground. An argument may only be as strong as its weakest link.
I would not expect the subjective judgements involved in RP’s welfare range estimates to be more robust than the subjective judgements involved in estimating the probability of an astronomically large future (or the probability of extinction in the next 100 years).
Thanks, Toby.
I definitely agree that the subjective guesses related to RP’s mainline welfare ranges are on shaky ground. However, I feel like they are justifiably on shaky ground. For example, RP used 9 models to determine their mainline welfare ranges, giving the same weight to each of them. I have no idea if this makes sense, but I find it hard to imagine what empirical evidence would inform the weights in a principled way.
In contrast, there is reasonable empirical evidence that the effects of interventions decay over time, and I guess they decay quickly enough for the effects after 100 years to account for less than 10 % of the overall effect, which makes me doubt astronomical long-term impacts.
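To see roughly what that would require (assuming, purely for illustration, that effects decay exponentially; the comment does not commit to a functional form):

```python
import math

# With exponential decay at annual rate r, the share of the total (integrated)
# effect that occurs after year T is exp(-r * T).
def share_after(T: float, r: float) -> float:
    return math.exp(-r * T)

# Decay rate needed for effects after 100 years to account for less than 10 %:
r_needed = math.log(10) / 100
print(r_needed)                # ~0.023, i.e. roughly 2.3 % decay per year
print(share_after(100, 0.03))  # ~0.05: at 3 %/year, only ~5 % of the effect remains after year 100
```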
I would also say there is reasonable evidence that the risk of human extinction is very low. A typical mammal species lasts around 1 M years, which implies an annual extinction risk of 10^-6. Mammals have gone extinct due to gradual or abrupt climate change, or due to other species, and I think these sources of risk are much less likely to drive humans extinct. So I conclude the annual risk of human extinction is lower than 10^-6. I guess the risk for humans is 1 % as high, which gives 10^-7 (= 10^(-6 - 2 + 1)) over the next 10 years. I do not think AI can be interpreted as another species, because humans have lots of control over its evolution.
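Spelling out that arithmetic (using only the numbers already given above):

```python
# Back-of-the-envelope check of the extinction-risk figures above.
annual_mammal_risk = 1 / 1_000_000   # species lasting ~1 M years -> 10^-6 per year
human_risk_fraction = 0.01           # humans guessed to face ~1 % of that risk

annual_human_risk = annual_mammal_risk * human_risk_fraction   # 10^-8 per year
risk_over_10_years = annual_human_risk * 10                    # 10^-7, matching 10^(-6 - 2 + 1)
print(annual_human_risk, risk_over_10_years)
```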