“There’s some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research — although I’ve talked to some who think addressing these issues first is key to building towards alignment.
Now don’t go setting me off about this topic! You know what I’m like. Suffice to say, I think combating social issues like algorithmic bias is potentially the only realistic way to begin the alignment process — build transparency, and so on. But that’s a conversation for another post :D