I assumed Nick was being sincere?
Yes, I was being sincere. I might have missed some meta thing here, as obviously I'm not steeped in AI alignment. Perhaps Trevor intended to reply to another comment but mistakenly replied here?
Oops! I'm off my groove today, sorry. I'm going to go read up on some of the conflict theory vs. mistake theory literature on my backlog in order to figure out what went wrong and how to prevent it (e.g., how human variation and inferential distance cause very strange mistakes due to miscommunication).