Hey Holly, I hope you're doing ok. I think the Bay Area atmosphere might be particularly unhealthy and tough around this issue atm, and I'm sorry for that. For what it's worth, you've always seemed like someone who has integrity to me.[1]
Maybe it's because I'm not in the thick of Bay Culture or super focused on AI x-risk, but I don't quite see why Mikhail reacted so strongly to this mistake (especially the language around deception? Or the suggestions to have the EA Community police Pause AI??). I also know you're incredibly committed to Pause AI, so I hope you don't think what I'm going to say is insensitive, but I feel even some of your own language here is a bit storm-in-a-teacup?
The mix-up itself was a mistake, sure, but not every mistake is a failure. You clearly went out of your way to make sure that initial incorrect impression was corrected. I don't really see how that could meet a legal slander bar, and I think many people will find OpenAI reneging on a policy to work with the Pentagon highly concerning whether or not it's in the charter.
I don't really want to have a discussion about California defamation law. Mainly, I just wanted to reach out and offer some support, and say that from my perspective, it doesn't look as bad as it might feel to you right now.
If he publishes then I'll read it, but my prior is sceptical, especially given his apparent suggestions. Turns out Mikhail published as I was writing! I'll give that a read.
Thanks. I don't think I feel too bad about the mistake or myself. I know I didn't do it on purpose and wasn't negligent (I had informed proofreaders and commenters, none of whom caught it, either), and I know I sincerely tried everything to correct it. But it was really scary.
Mikhail has this abstruse criticism he is now insisting I don't truly understand, and I'm pretty sure people reading his post will not understand it, either, instead taking away the message that I ~lied or otherwise made a "deontological" violation, as I did when I read it.
[1] Even when I disagree with you! I know you want to hold yourself to a higher standard, but still.