I’ve often been frustrated by this assumption over the last 20 years, but don’t remember any good pieces about it.
It may stem partly from Eliezer’s original alignment approach of creating a superintelligent sovereign AI, where, if that goes right, other risks really would be dealt with.