I don’t think the comparison with human alignment being successful is fair.
If you mean that most people don’t go on to be antisocial etc., which is comparable to non-X AI risk, then yes, perhaps simple techniques like a ‘good upbringing’ are working on humans. A lot of it, however, is just baked in by evolution regardless. If you mean that most humans don’t go on to become X-risks, then that mostly has to do with lack of capability, rather than with them being aligned. There are very few people I would trust with 1000x human abilities, assuming everyone else remains a 1x human.