When people say “even if there’s a 1% chance” without providing any other evidence, I have no reason to believe there is a 1% chance vs 0.001% or a much smaller number.
I think you’re getting hung up on the specific numbers which I personally think are irrelevant. What about if one says something like:
“Given arguments put forward by leading AI researchers such as Eliezer Yudkowsky, Nick Bostrom, Stuart Russell and Richard Ngo, it seems that there is a very real possibility that we will create superintelligent AI one day. Furthermore, we are currently uncertain about how we can ensure such an AI would be aligned to our interests. A superintelligent AI that is not aligned to our interests could clearly bring about highly undesirable states of the world that could persist for a very long time, if not forever. There seem to be tractable ways to increase the probability that AI will be aligned to our interests, such as through alignment research or policy/regulation, meaning such actions are a very high priority.”
There’s a lot missing from that but I don’t want to cover all the object-level arguments here. My point is that waving it all away by saying that a specific probability someone has cited is arbitrary seems wrong to me. You would need to counter the object-level arguments put forward by leading researchers. Do you find those arguments weak?
Ah gotcha. So you’re specifically objecting to people who say ‘even if there’s a 1% chance’ based on vague intuition, and not to people who think carefully about AI risk, conclude that there’s a 1% chance, and then act upon it?
Exactly! “Even if there’s a 1% chance” on its own is a poor argument; “I am pretty confident there’s at least a 1% chance and therefore I’m taking action” is totally reasonable.
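To make the disagreement above concrete: in a simple expected-value frame, the force of the argument scales linearly with the probability you assign, so "1% vs 0.001%" is not a pedantic distinction. A minimal sketch (all numbers hypothetical, chosen only for illustration):

```python
def expected_loss(probability: float, stakes: float) -> float:
    """Naive expected loss: probability of the bad outcome times its cost."""
    return probability * stakes

# Arbitrary units representing the cost of the bad outcome.
STAKES = 1_000_000

loss_if_1_percent = expected_loss(0.01, STAKES)     # estimate of 1%
loss_if_tiny = expected_loss(0.00001, STAKES)       # estimate of 0.001%

# The two estimates imply expected losses that differ by a factor of 1000,
# which is why the evidence behind the cited probability matters.
print(loss_if_1_percent / loss_if_tiny)
```

This is only a sketch of why unsupported probability estimates can't be waved through: without evidence narrowing the range, the implied priority of acting swings by orders of magnitude.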