Hey Holly, I hope you’re doing ok. I think the Bay Area atmosphere might be particularly unhealthy and tough around this issue atm, and I’m sorry for that. For what it’s worth, you’ve always seemed like someone who has integrity to me.[1]
Maybe it’s because I’m not in the thick of Bay Culture or super focused on AI x-risk, but I don’t quite see why Mikhail reacted so strongly (especially the language around deception? Or the suggestions to have the EA Community police Pause AI??) to this mistake. I also know you’re incredibly committed to Pause AI, so I hope you don’t think what I’m going to say is insensitive, but I think even some of your own language here is a bit storm-in-a-teacup?
The mix-up itself was a mistake, sure, but not every mistake is a failure. You clearly went out of your way to make sure that initial incorrect impression was corrected. I don’t really see how that could meet a legal slander bar, and I think many people will find OpenAI reneging on a policy to work with the Pentagon highly concerning whether or not it’s in the charter.
I don’t really want to have a discussion about California defamation law. Mainly, I just wanted to reach out and offer some support, and say that from my perspective, it doesn’t look as bad as it might feel to you right now.
If he publishes then I’ll read it, but my prior is sceptical, especially given his apparent suggestions. Turns out Mikhail published as I was writing! I’ll give that a read.
Thanks. I don’t think I feel too bad about the mistake or myself. I know I didn’t do it on purpose and wasn’t negligent (I had informed proofreaders and commenters, none of whom caught it, either), and I know I sincerely tried everything to correct it. But it was really scary.
Mikhail has this abstruse criticism he is now insisting I don’t truly understand, and I’m pretty sure people reading his post will not understand it, either, instead taking away the message that I ~lied or otherwise made a “deontological” violation, as I did when I read it.
[1] Even when I disagree with you! I know you want to hold yourself to a higher standard, but still.