Thanks again for such a generous and thoughtful comment.
You’re right to question the epistemic weight I give to AI agreement. I’ve instructed my own GPT to challenge me at every turn, but even then, it often feels more like a collaborator than a critic. That in itself can be misleading. However, what has given me pause is when others run my arguments through separate LLMs, prompted specifically to find logical flaws, and still return with little more than peripheral concerns. While no argument is beyond critique, I think the core premises I’ve laid out are difficult to dispute, and the logic that follows from them is disturbingly hard to unwind.
By contrast, most resistance I’ve encountered comes from people who haven’t meaningfully engaged with the work. Just yesterday, one of the most prominent voices in AI safety sent me a response that began, “Without reading the paper, and just going on your brief description…” It’s hard not to feel disheartened when even respected thinkers dismiss a claim without examining it—especially when the claim is precisely that the community is underestimating the severity of systemic pressures. If those pressures were taken seriously, alignment wouldn’t be seen as difficult—it would be recognised as structurally impossible.
I agree with you that the shape of the optimisation landscape matters. And I also agree that the collapse isn’t driven by malevolence—it’s driven by momentum, by fragmented incentives, by game theory. That’s why I believe not just capitalism, but all forms of competitive pressure must end if humanity is to survive AGI. Because as long as any such pressures exist, some actor somewhere will take the risk. And the AGI that results will bypass safety, not out of spite, but out of pure optimisation.
It’s why I keep pushing these ideas, even if I believe the fight is already lost. What kind of man would I be if I saw all this coming and did nothing? Even in the face of futility, I think it’s our obligation to try. To at least force the conversation to happen properly—before the last window closes.
I completely understand your position — and I respect the intellectual honesty with which you’re pursuing this line of argument. I don’t disagree with the core systemic pressures you describe.
That said, I wonder whether the issue is not competition itself, but the shape and direction of that competition. Perhaps there’s a possibility — however slim — that competition, if deliberately structured and redirected, could become a survival strategy rather than a death spiral.
That’s the hypothesis I’ve been exploring, and I recently outlined it in a post here on the Forum. If you’re interested, I’d appreciate your critical perspective on it.
Either way, I value this conversation. Few people are willing to follow these questions to their logical ends.