I think it’s important to give the audience analogies they’re already familiar with, such as evolution producing humans, humans introducing invasive species into new environments, and viruses. These are all examples of “agents in complex environments which aren’t malicious or Machiavellian, but disrupt the original group of agents anyway”.
I believe these analogies are not object-level enough to be arguments for AI X-risk in themselves, but I think they’re a good way to help people quickly understand the danger of a superintelligent, goal-directed agent.