Hi Jeremy
I think an AGI would be much better at creating arguments for why humanity should not be eliminated. If an AGI is incapable of creating these arguments itself, I wonder whether it is capable enough to destroy humanity.
I think the thing to worry about more is that an AGI correctly determines that humanity, or most of humanity, needs to be destroyed (e.g., the AGI cares about all life, and all humans kill their face mites, so they must be stopped). But is that really all that bad?