It’s always seemed to me that the most dangerous person to have in charge of an AGI would be someone with an extreme desire to optimise the world combined with extreme overconfidence in their own intelligence and abilities. It’s quite possible that a dangerous AI will only come about if such a figure builds it deliberately.

Unfortunately, this seems to describe a few prominent EA figures pretty well. I believe there are people in this community who should be kept as far away from AI development as possible, and it is entirely possible that AI safety research is net negative for this reason.