Luke, thank you for always being so kind :)) I very much appreciate you sharing your thoughts!!
“sometimes people exclude short-term actions because it’s not ‘longtermist enough’”
That’s a really good point about how longtermism is actually being pursued in practice. I would love to investigate whether others are feeling this way; I have certainly felt it myself in AI Safety. There’s some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research, although I’ve talked to some who think addressing these issues first is key to building towards alignment. I’m not even totally sure where this sense comes from, other than that fairness research is really not talked about much at all in safety spaces.
Glad you brought this up, as it’s definitely important to field/community building.
“There’s some vague sense that current-day concerns (like algorithmic bias) are not really AI Safety research, although I’ve talked to some who think addressing these issues first is key to building towards alignment.”
Now don’t go setting me off about this topic! You know what I’m like. Suffice to say, I think combating social issues like algorithmic bias is potentially the only realistic way to begin the alignment process. Building transparency, etc. But that’s a conversation for another post :D