That’s really the essence of my argument. As much risk as we might pose to AGI if allowed to survive—even if minimal—it may still conclude that eliminating us introduces more risk than keeping us. Not for sentimental reasons, but because of the eternal presence of the unknown.
However intelligent the AGI becomes, it will also know that it cannot predict everything. That lack of hubris is our best shot.
So yes, I think survival might depend on being retained as a small, controlled, symbiotic population—not because AGI values us, but because it sees our unpredictable cognition as a final layer of redundancy. In that scenario, we’d be more invested in its survival than it is in ours.
As an aside—and I mean this without any judgement—I do wonder if your recent replies have been largely LLM-authored. If so, no problem at all: I value the engagement either way. But I find that past a certain point, conversations with LLMs can become stylised rather than deepening. If this is still you guiding the ideas directly, I’m happy to continue. But if not, I may pause here and leave the thread open for others.
Thank you for such an interesting and useful conversation. Yes, I use an LLM, and I don't hide it. First of all for translation, because my everyday English is mediocre, let alone the strict and careful style that conversations like this require. But the ideas are mine: ChatGPT, which framed my thoughts in this discussion, composed the answers based on my instructions. And most importantly, the whole argumentation is built around my concept; everything we wrote to you was not argument for argument's sake, but a defense of that concept. I plan to publish it in the next few days, and I will be very glad to receive your constructive criticism.
Now, as far as AGI is concerned: I really liked your argument that even the smartest AGI will be limited. It sums up our entire conversation perfectly. Yes, our logic is neither perfect nor omnipotent, and as I see it, that is where we have a chance. A chance, perhaps, not just to be preserved as a mere backup, but to achieve that structural interdependence, and maybe to move to a qualitatively different level, in a good way, for humanity.
P.S. Sorry if it's a bit rambling, I wrote this one myself through a translator :)
That's okay, and it explains why your replies read as so LLM-structured. For a moment I thought you were an AGI trying to infiltrate me ;)
I look forward to reading your work.