These arguments appeal to phenomenal stakes, implying that, using expected value reasoning, even a very small probability of the bad thing happening means we should try to reduce the risk, provided there is some degree of tractability in doing so.
Do you dismiss such arguments because:
You reject EV reasoning if the probabilities are sufficiently small (i.e. anti-fanaticism)
There are issues with this response, e.g. see here for one
You think the probabilities cited are too arbitrary, so you don’t take the argument seriously
But the specific numerical probabilities themselves are not super important in longtermist cases. Usually, because of the astronomical stakes, what matters is that there is a “non-negligible” probability decrease we can achieve. Much has been written about why there might be non-negligible x-risk from AI, biosecurity, etc., and about things we can do to reduce this risk. The actual numerical probabilities are insanely hard to estimate, but it’s also not that important to do so.
You reject the arguments that we can reduce x-risk in a non-negligible way (e.g. from AI, biosecurity etc.)
You reject phenomenal stakes
Some other reason?
When people say “even if there’s a 1% chance” without providing any other evidence, I have no reason to believe there is a 1% chance vs 0.001% or a much smaller number.
I think you’re getting hung up on the specific numbers, which I personally think are irrelevant. What if one instead says something like:
“Given arguments put forward by leading AI researchers such as Eliezer Yudkowsky, Nick Bostrom, Stuart Russell and Richard Ngo, it seems that there is a very real possibility that we will create superintelligent AI one day. Furthermore, we are currently uncertain about how we can ensure such an AI would be aligned to our interests. A superintelligent AI that is not aligned to our interests could clearly bring about highly undesirable states of the world that could persist for a very long time, if not forever. There seem to be tractable ways to increase the probability that AI will be aligned to our interests, such as through alignment research or policy/regulation, meaning such actions are a very high priority.”
There’s a lot missing from that, but I don’t want to cover all the object-level arguments here. My point is that waving it all away by saying that a specific probability someone has cited is arbitrary seems wrong to me. You would need to counter the object-level arguments put forward by leading researchers. Do you find those arguments weak?
Ah gotcha. So you’re specifically objecting to people who say ‘even if there’s a 1% chance’ based on vague intuition, and not to people who think carefully about AI risk, conclude that there’s a 1% chance, and then act upon it?
Exactly! “Even if there’s a 1% chance” on its own is a poor argument; “I am pretty confident there’s at least a 1% chance, and therefore I’m taking action” is totally reasonable.
These arguments appeal to phenomenal stakes, implying that, using expected value reasoning, even a very small probability of the bad thing happening means we should try to reduce the risk, provided there is some degree of tractability in doing so.
To be clear, the argument in my post only requires “very small” to mean something like 1% or 0.1%, not e.g. 10^-10. I am much more skeptical of arguments that involve probabilities on the order of 10^-10.
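For what it’s worth, here is a minimal back-of-the-envelope sketch of the expected-value arithmetic this exchange turns on. Every number in it (the stakes, the cost threshold, and the candidate risk reductions) is a purely illustrative assumption, not a figure anyone in the thread has endorsed; the only point is that with astronomical stakes, a 1% or 0.1% risk reduction dominates almost any plausible cost, while a 10^-10-scale reduction may not.

```python
# Illustrative expected-value sketch. All figures are hypothetical assumptions,
# chosen only to show how the arithmetic behaves, not estimates from the thread.

ASTRONOMICAL_STAKES = 1e16  # hypothetical number of future lives at stake
COST_THRESHOLD = 1e9        # hypothetical cost of the intervention, in life-equivalents

def expected_lives_saved(risk_reduction: float) -> float:
    """Expected value of an intervention that lowers extinction risk by `risk_reduction`."""
    return risk_reduction * ASTRONOMICAL_STAKES

for delta in (1e-2, 1e-3, 1e-10):  # a 1%, 0.1%, and 10^-10-scale reduction
    ev = expected_lives_saved(delta)
    verdict = "clears the cost threshold" if ev > COST_THRESHOLD else "does not clear it"
    print(f"risk reduction {delta:g}: EV ~ {ev:.1e} lives ({verdict})")
```

On these entirely made-up numbers, the 1% and 0.1% reductions clear the bar by several orders of magnitude while the 10^-10 reduction does not, which is the sense in which the exact probability matters far less than whether it is non-negligible.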