Fair point. It’s more nuanced later on: “Almost all of the conversations about risk have to do with the potential consequences of AI systems pursuing goals that diverge from what they were programmed to do and that are not in the interests of humans. Everyone can get behind this notion of AI alignment and safety, but this is only one side of the danger. Imagine what could unfold if AI does do what humans want.”