Hi Sean, thank you for engaging with the essay. Glad you appreciate it.
I think psychopaths don't dominate society (setting aside the fact that they're found disproportionately among CEOs) for a few reasons:
There just aren't that many of them. At only about 2% of the population, they're not enough to form a dominant bloc.
They don’t cooperate with each other just because they’re all psychos. Cooperation, or lack thereof, is a big deal.
They eventually die.
For the most part, they don't exactly have their shit together. They can be emotional and driven by desires, all of which gets in the way of efficiently pursuing goals.
Note that a superintelligent AGI would not be affected by any of the above.
I think the issue with a guardian AGI is simply that it would be limited by morality. In my essay I frame it as Superman vs. Zod: Zod can just fight, but Superman has to fight and protect at the same time, and that's a real handicap. The only reason Zod doesn't win in the comics is that the story demands it.
Beyond that, creating a superintelligent guardian AGI that functions correctly right away without going rogue, and does so before other AGIs emerge on their own, is a real tall order. It would take a lot of unlikely things falling into place: global cooperation, perfect programming, getting there before an amoral AGI does, and so on. I go into the difficulty of alignment in great detail in my first essay. Feel free to give it a read if you've a mind to.
Thanks for the reply. I still like to hold out hope in the face of what seem like long odds. I'd rather go down swinging, if there's any non-zero chance of success, than succumb to fatalism and be defeated without even trying.
This is exactly why I'm writing these essays. This is my attempt at a haymaker, though I'd liken it less to going down swinging and more to kicking my feet and trying to get free after the noose has already tightened around my neck and hauled me off the ground.