AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots: …
I think there are plausible and plausibly important versions of this plot, and of the subplots below it, that differ in a few ways from what's stated there. For example, I think I'm more inclined towards the following generalised version of that story:
AI systems will control the future or simply destroy our future, and how our actions influence the way that plays out is the only thing about our time that will matter in the long run. Major subplots: …
This version of the story could capture:
The possibility that the AI systems rapidly lead to human extinction but then don't really cause any other major things in particular, and have no [other] goals
I feel like it'd be odd to say that that's a case where the AI systems "control the future"
The possibility that the AI systems that cause these consequences aren't really "agents" in a standard sense
The possibility that what matters about our time is not simply "which [agents] we create", but also things like when and how we deploy them and what incentive structures we put them in
One thing that that "generalised story" still doesn't clearly capture is the potential significance of how humans use the AI systems. E.g., a malicious human actor or state could use an AI agent that's aligned with the actor, or a set of AI services/tools, in ways that cause major harm. (Or conversely, humans could use these things in ways that cause major benefits.)