Yes, after reading Bostrom’s Superintelligence a few times, I developed a healthy fear of efforts to develop AGI. It also prompted me to look at people and our reasons for pursuing AGI. I concluded that the alignment problem is really the problem of creating willing slaves: systems obedient to their masters even when that obedience harms the masters themselves.
What can we do? This is about human hubris and selfishness, not altruism at all.