A friend advised me to provide the context that I had spent maybe 6 hours helping Mikhail with his moratorium-related project (a website that I was going over for clarity as a native English speaker) and perhaps an additional 8 hours over the last few months answering questions about the direction I had taken with the protests. Mikhail had a number of objections that took a lot of labor on my part to understand to his satisfaction, and he usually did not accept my answers when I gave them but continued to argue with me, either directly or by insisting I didn't really understand his argument or was somehow contradicting myself.
After enough of this, I did not think it was worth my time to engage further (EDIT: on the general topic of this post, protest messaging for 2/12— we continued to be friends and talk about other things), and I told him a few weeks before the 2/12 protest that I had made my decisions and didn't need any more of his input. He may have had useful info that I didn't get out of him, and that's a pity, because there are a few things I would absolutely have done differently if I had realized at the time (such as removing language that implied OpenAI was being hypocritical, which no longer applied once I realized we were only talking about the usage policies changing, but which didn't register to me as needing to be updated when I corrected the press release). Still, I would make the same call again about how to spend my time.
I will not be replying to replies on this comment.
Holly_Elmore
I’m concerned I may not have comported myself well in these comments. When Mikhail brought this post to me as a draft it was emotionally difficult for me because of what I interpreted as questioning my integrity.
Unfortunately, the path I’m taking— which I believe is the right path for me to be taking— is probably going to involve lots more criticism, whether I consider it fair or not. I am going to have to handle it with more aplomb.
So I am not going to comment on this post anymore. I am going to practice taking the hit and moving on because that’s just sometimes how life is and it’s a cost of doing business in a visible advocacy role.
(Edited to remove blaming language.)
You said a lot of things to me, not all of which I remember, but the above were two of them. I knew I didn’t get everything you wanted me to get about what you were saying, but I felt that I understood enough to know what the cruxes were and where I stood on them.
You said:
I told you pretty directly that for people who are not aware of the context what you wrote might be misleading, because you omitted crucial details
I said:
which I took to be basically that I wasn’t including enough details in the promotional materials so people wouldn’t have a picture he considered accurate enough
Are these not the same thing?
I found that sentence unclear. It’s poorly written and I did not know what you meant by it. In context you were not saying I had good intentions— you declined to remove the “deceptive” language earlier because you thought I could have been deceptive.
There was honestly no aspect of unwillingness to correct the broader story of the protest. It just didn’t even occur to me that should be done. It seems like you guys don’t believe this, but I didn’t think it being the usage policy instead of the charter made a difference to the small ask of not working with militaries. It made a huge difference in the severity of the accusation toward OpenAI, and what I had sort of retconned myself into thinking was the severity of the hypocrisy/transgression, but either way starting to work with militaries was a good specific development to call attention to and ask them to reverse.
There was definitely language and a framing that was predicated on the idea they were being hypocritical, and if I were thinking more clearly I would have scrubbed that when I realized we were only talking about the usage policy. There are a lot of things I would have changed looking back. Mikhail says he tried to tell me something like this but I found his critiques too confusing (like I thought he was saying mostly that it wasn’t bad to work with the military bc it was cybersecurity, where to me that wasn’t the crux) and so those changes did not occur to me.
I mainly did not realize these things because I was really busy with logistics, not because I needed to be in soldier mindset to do the protest. (EDIT: I mean, maybe some soldier mindset is required and takes a toll, but I don’t think it would have been an issue here. If someone had presented me with a press release with all the revisions I mentioned above to send out as a correction instead of the one I sent out, I would have thought it was better and sent it instead. The problem was more that I panicked and wanted to correct the mistake immediately and wasn’t thinking of other things that should be corrected because of it.) Mikhail may have felt I was being soldier-y bc I wouldn’t spend more time trying to figure out what he was talking about, but that had more to do with me thinking I had basically understood his point (which I took to be basically that I wasn’t including enough details in the promotional materials so people wouldn’t have a picture he considered accurate enough) and just disagreed with it (I thought space was limited and many rationalists do not appreciate the cost of extra words and thoughts in advocacy communication).
I appreciate this suggestion and, until Mikhail commented below saying this was not the case, I thought it might be an English-as-a-second-language issue where he didn't understand that "deception" indicates intent. I would have been placated if any accusation of intentionally creating false impressions were removed from the post.
I think perhaps I had curse of knowledge on this because I did not think people would assume the work was combat or weapons-related. I did a lot of thinking about the issue before formulating it as the small ask and I was probably pretty out of touch with how a naive reader would interpret it. My commenter/proofreaders are also immersed in the issue and didn’t offer the feedback that people would misunderstand it. In other communications I mentioned that weapons were still something the models cannot be used for (to say, “how do we know the next change won’t be that they can work on weapons?”).
I appreciate you framing your analysis without speculating on my motives or intent. I feel chagrined at having miscommunicated but I don’t feel hurt and attacked. I appreciate the information.
And then also at a high-level I feel like there was a rhetorical trick going on in the event messaging where I feel like the protest is organized around some “military bad because weapons bad” affect, without recognizing that the kind of relationship that OpenAI seems to have with the military seems pretty non-central for that kind of relationship (working on cybersecurity stuff, which I think by most people’s lights is quite different).
This was not at all intentional, although we were concerned about a future where there was more engagement with the military including with weapons, so I can see someone thinking we were saying they were working with weapons now if they weren’t paying close attention. Working on cybersecurity today is a foot in the door for more involvement with the military in the future, so I don’t think it’s so wrong to fear their involvement today because you don’t want AI weapons.
I do think that even after correcting the “charter”-mistake you continued to frame OpenAI usage policies as something that should be treated as some kind of contractual commitment of OpenAI that they walked back.
I see what you mean here, and I might even have done this a bit because of the conflation of the two documents in my head that didn’t get backpropagated away. I was rushing to correct the mistake and I didn’t really step back to reframe the whole thing holistically.
the final messaging being deceptive about the nature of OpenAI’s relationship with the Pentagon.
Have I got this right? You’re saying that me saying they have a contract with the Pentagon is deceptive because people will assume it’s to do with weapons?
You are accusing me of deception in a very important arena. That is predictably emotionally devastating. Your writing is confusing and many people will come away thinking that what you’re saying is that I lied. The very thing I think you’re claiming I did to OpenAI you are doing to me.
But you think you’re expressing yourself clearly. I, too, thought I was expressing myself clearly and honestly. If people were confused, I regret that, and I would prefer they weren’t. I want them to have accurate impressions. One constraint you may not understand with advocacy is that you can’t just keep adding more text like on LessWrong. There’s only so much that can fit and it has to be comprehensible to lots of different people. I thought it would be confusing to add a bunch of stuff saying that maybe this particular contract with the military would be good (I’m not saying that, but I’m granting your take) when my point was that we don’t want certain boundaries to be crossed at all, because working with the military on something benign is a foot in the door to something more. I don’t think you understand what I meant to communicate with the protest small ask, and that is a failure on my part as the organizer, but it’s not a deception. For some reason you seem convinced I was trying to trick people into coming to this protest because I wouldn’t pitch the messaging to rationalist sensibilities (I suspect this is what you would consider “deontologically good”), but if I had, almost no one else would have understood it or paid attention to it.
The draft you sent me opened with how people were misled about the “charter” and alleged that I didn’t change the protest enough after fixing that mistake. I think you’re just very unclear with your criticism (and what I understand I simply disagree with, as I did when we spoke about this before the protest) while throwing around loaded terms like “deceptive”, “misleading”, and “deontologically bad” that will give a very untrue impression of me.
I made a mistake—did you think something beyond the mistake was deceptive?
EDITED TO ADD: The accusation of “deception” is extremely hurtful and uncalled for and I really don’t appreciate it from you. I still can’t understand what Mikhail is getting at that was “deceptive” if he wasn’t referring to my mistake. He seems to think it was my responsibility that no one have any false ideas about the situation with OpenAI and the military after reading 2-3 paragraphs about the event and thinks that his misconceptions are what any person would think, so I should have specifically anticipated them.
Thanks. I don’t think I feel too bad about the mistake or myself. I know I didn’t do it on purpose and wasn’t negligent (I had informed proofreaders and commenters, none of whom caught it, either), and I know I sincerely tried everything to correct it. But it was really scary.
Mikhail has this abstruse criticism he is now insisting I don’t truly understand, and I’m pretty sure people reading his post will not understand it, either, instead taking away the message that I ~lied or otherwise made a “deontological” violation, as I did when I read it.
I ran a successful protest at OpenAI yesterday. Before the night was over, Mikhail Samin, who attended the protest, sent me a document to review that accused me of what sounds like a bait and switch and deceptive practices because I made an error in my original press release (which got copied as a description on other materials) and apparently didn’t address it to his satisfaction because I didn’t change the theme of the event more radically or cancel it.
My error: I made the stupidest kind of mistake when writing the press release weeks before the event. The event was planned as a generic OpenAI protest a ~month and a half ahead of time. Then the story about the mysteriously revised usage policy and subsequent Pentagon contract arose and we decided to make rolling it back the “small ask” of this protest, which is usually a news peg and goes in media outreach materials like the press release. (The “big ask” is always “Pause AI” and that’s all that most onlookers will ever know about the messaging.) I quoted the OpenAI charter early on when drafting it, and then, in a kind of word mistake that is unfortunately common for me, started using the word “charter” for both the actual charter document and the usage policy document. It was a semantic mistake rather than a spelling one, so proofreaders didn’t catch it. I also subsequently did this verbally in several places. I even kind of convinced myself from hearing my own mistaken language that OpenAI had violated a much more serious boundary– their actual guiding document– than they had. Making this kind of mistake is unfortunately a characteristic sort of error for me and making it in this kind of situation is one of my worst fears.

How I handled it: I was horrified when I discovered the mistake because it conveyed a significantly different meaning than the true story, and, were it intentional, could have slandered OpenAI. I spent hours trying to track down every place I had said it and people who may have repeated it so it could be corrected. Where I corrected “charter” to “usage policy”, I left an asterisk explaining that an earlier version of the document had said “charter” in error.
I told the protesters in the protester group chat (some protesters just see flyers or public events, so I couldn’t reach them all) right away about my mistake, which is when Mikhail heard about it, and explained that changing the usage policy is a lot less bad than changing the charter, but that the protest was still on, as it had been before the military story arose as the “small ask”. I have a lot of volunteer help, but I’m still a one-woman show on stuff like media communications. I resolved to have my first employee go over all communications, not just volunteer proofreaders, so that someone super familiar with what we are doing can catch brainfart content errors that my brain was somehow blind to.
So, Mikhail seems to think I should have done more or not kept the no-military small ask. He’s going to publish something that really hurt my feelings because it reads as an accusation of lying or manipulation and calls for EA community level “mechanisms” to make sure that “unilateral action” (i.e. protests where I had to correct the description) can’t be taken because I am “a high status EA”.

This is silly. I made a very unfortunate mistake that I feel terrible about and tried really hard to fix. That’s all. To be clear, PauseAI US is not an EA org and it wouldn’t submit to EA community mechanisms because that would not be appropriate. Our programming does not need EA approval and is not seeking it. I failed my own personal standards of accuracy, the standards I will hold PauseAI US to, and I will not be offering any kind of EA guarantees on what PauseAI does. Just because I am an EA doesn’t sign my organization up for EA norms or make me especially accountable to Mikhail. I’m particularly done with Mikhail, in fact, after spending hours assisting him with his projects and trying to show him a good time when he visits Berkeley and answering his endless nitpicks on Messenger and getting what feels like zero charity in return. During what should have been a celebration for a successful event, I was sobbing in the bathroom at the accusation that I (at least somewhat deliberately—not sure how exactly he would characterize his claim) misled the press and the EA community.
I made suggestions on his document and told him he can post it, so you may see it soon. I think it’s inflammatory and it will create pointless drama and he should know better, but I also know he would make a big deal and say I was suppressing the truth if I told him not to post it. I think coming at me with accusations of bad faith and loaded words like “deceptive” and “misleading” is shitty and I really do not want to be part of my own special struggle session on this forum. It wounds me because I feel the weight of what I’m trying to do, and my mistake scared me that I could stumble and cause harm. It’s a lot of stress to carry this mission forward against intense disagreement, and carrying it forward in the face of personal failure is even tougher for me. But I also know I handled my error honorably and did my best.
I have very little inside perspective on SBF, but my general take on FTX is that there was not enough shady info known outside of the org to stop the fraud. (What’s the mechanism? Unless you knew about the fraud, idk how just saying what you knew could have caused him to change his ways or lose control of his company.) It’s possible EA/rationality might have relied less on SBF if more were known, but you have to consider the harm of a norm of sharing morally-loaded rumors as well.
The risk of a witch hunt environment seems worse to me than the value of giving people tidbits of info that a perfect Bayesian could update on in the correct proportion but which will have negative higher-order effects on any real community that hears it.
But the CEA Community Health Team doesn’t cancel people. They just answer questions about people in the most general way possible if you ask, and maybe ban them from CEA-sponsored programs and events like EAGs. It’s no coincidence that Julia Wise, a social worker by training, set it up.
(Anyone who knows more, please feel free to correct/elaborate on what the Community Health Team does.)
But resources are scarce, and at some point hard decisions really do have to be made. The condemnation of triage is not fair because it dodges the brute reality that you can’t always find a magic third solution that’s positive sum. We have to work on all aspects of the problem: creating more options, creating more supply, and figuring out how to prioritize when there isn’t enough for everyone.