Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, given that it's forever, I'm not sure which scenario I'd prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude that B is more rational.
I also get that it's an analogy meant to get me thinking about the deeper issues here. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn't hypothetical anymore.
It's likely to happen in a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from what evidence I've seen recently, it doesn't seem to be). [epistemic certainty: relatively low, ~60%]