Rather than trying to refute Alice from general principles, I think Bob should point to concrete reasons for optimism (for example, Bob could say “for reasons A, B, and C, it is likely that we can coordinate on not building AGI for the next 40 years and solve alignment in the meantime”).
As an aside to the main point of your post, I think Bob arrived at his position by default. I suspect part of it comes from the fact that the bulk of human experience deals with natural systems. These natural systems are often robust and could be described as default-success. Take human interaction: we assume that any stranger we meet is not a sociopath, because they rarely are. This system is robust and default-success because anti-social behavior is maladaptive. Because AI is so easy for our brains to place in the category of humans, we may by extension put it in the “natural system” box, and with that comes the assumption that its behavior reverts to default-success. Have you ever been irritated at your computer because it freezes? That irrational response could be traced to anger that the computer doesn’t follow the rules of behavior expected of things in the (human) box we erroneously placed it in.
blueberry—this is a very good point about humans applying their ‘default-success’ heuristic (regarding social interactions with mostly-non-psychopathic humans) inappropriately to their potential interactions with AIs.