It appears that the phrase “Friendly AI research” has been replaced by “AI alignment research”. Why was that term picked?
Luke talks about the pros and cons of various terms here. Then, long story short, we asked Stuart Russell for some thoughts and settled on “AI alignment” (his suggestion, IIRC).