AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots: …
I think there are plausible and plausibly important plots similar to this one, and subplots similar to those below it, that differ in a few ways from what’s stated there. For example, I’m more inclined towards the following generalised version of that story:
AI systems will control the future or simply destroy our future, and how our actions influence the way that plays out is the only thing about our time that will matter in the long run. Major subplots: …
This version of the story could capture:
The possibility that the AI systems rapidly cause human extinction but then don’t go on to cause any other major things in particular, and have no [other] goals
I feel like it’d be odd to say that that’s a case where the AI systems “control the future”
The possibility that the AI systems that cause these consequences aren’t really “agents” in a standard sense
The possibility that what matters about our time is not simply “which [agents] we create”, but also things like when and how we deploy them and what incentive structures we put them in
One thing that generalised story still doesn’t clearly capture is the potential significance of how humans use the AI systems. E.g., a malicious human actor or state could use an AI agent that’s aligned with that actor, or a set of AI services/tools, in ways that cause major harm. (Or conversely, humans could use these things in ways that cause major benefits.)