This analysis roughly aligns with mine and is also why I didn’t go to this protest (but did go to a previous protest organized by Pause AI). This protest seemed to me like it overall communicated pretty deceptively around how OpenAI was handling its military relations, and also I don’t really see any reason to think that engaging with the military increases existential risk very much (at least I don’t see recent changes as an update on OpenAI causing more risk, and wouldn’t see reversing those changes as progress towards reducing existential risk).
I made a mistake—did you think something beyond the mistake was deceptive?
EDITED TO ADD: The accusation of “deception” is extremely hurtful and uncalled for and I really don’t appreciate it from you. I still can’t understand what Mikhail is getting at that was “deceptive” if he wasn’t referring to my mistake. He seems to think it was my responsibility that no one have any false ideas about the situation with OpenAI and the military after reading 2-3 paragraphs about the event and thinks that his misconceptions are what any person would think, so I should have specifically anticipated them.
Yeah, though I don’t think it’s super egregious. I do think that even after correcting the “charter” mistake you continued to frame OpenAI’s usage policies as something that should be treated as some kind of contractual commitment of OpenAI that they walked back.
But that seems backwards to me: a ToS is a commitment by users of OpenAI to OpenAI, not a commitment by OpenAI to its users (in the vast majority of cases). For example, for LessWrong, our ToS includes very few commitments by us, and I definitely don’t see myself as having committed to never changing them. If we have a clause in our ToS that asks users not to make too many API requests in quick succession, then I definitely have not committed to not serving people who nevertheless make that many requests (indeed, in many cases, like search engines or users asking us for rate-limiting exceptions to build things like greaterwrong.com, I have totally changed how we treat users who make too many requests).
Framing it as having gone back on a commitment seems kind of deceptive to me.
I also think there is something broader that is off about organizing “Pause AI” protests that then advocate for things that seem to me mostly unrelated to pausing AI (and instead lean into other controversial topics). Like, I now have a sense that if I attend future Pause AI events, my attendance will be seen and used as social proof that OpenAI should give in to pressure on some other random controversy (like making contracts with the military), and that feels like it has some deceptive components to it.
And then also at a high level, I feel like there was a rhetorical trick going on in the event messaging, where the protest is organized around some “military bad because weapons bad” affect, without recognizing that the kind of relationship OpenAI seems to have with the military is a pretty non-central example of that kind of relationship (working on cybersecurity stuff, which I think by most people’s lights is quite different).
(I also roughly agree with Jason’s analysis here)
This was not at all intentional, although we were concerned about a future where there was more engagement with the military including with weapons, so I can see someone thinking we were saying they were working with weapons now if they weren’t paying close attention. Working on cybersecurity today is a foot in the door for more involvement with the military in the future, so I don’t think it’s so wrong to fear their involvement today because you don’t want AI weapons.
I do think that even after correcting the “charter” mistake you continued to frame OpenAI’s usage policies as something that should be treated as some kind of contractual commitment of OpenAI that they walked back.
I see what you mean here, and I might even have done this a bit because of the conflation of the two documents in my head that didn’t get backpropagated away. I was rushing to correct the mistake and I didn’t really step back to reframe the whole thing holistically.
Yeah, makes sense. It seemed to me you were in a kind of tight spot, having scheduled and framed this specific protest around a thing that you ended up realizing had some important errors in it.
I think it was important to reframe the whole thing more fully when that happened, but man, running protests is hard and requires a kind of courage and defiance that I think is cognitively hard to combine with reframing things like this. I still think it was a mistake, but I also feel sympathetic to how it happened, at least how it played out in my mind (I don’t want to claim I am confident what actually happened, I might still be misunderstanding important components of how things came to pass).
There was honestly no aspect of unwillingness to correct the broader story of the protest; it just didn’t even occur to me that that should be done. It seems like you guys don’t believe this, but I didn’t think it being the usage policy instead of the charter made a difference to the small ask of not working with militaries. It made a huge difference in the severity of the accusation toward OpenAI, and what I had sort of retconned myself into thinking was the severity of the hypocrisy/transgression, but either way starting to work with militaries was a good specific development to call attention to and ask them to reverse.
There was definitely language and a framing that was predicated on the idea that they were being hypocritical, and if I were thinking more clearly I would have scrubbed that when I realized we were only talking about the usage policy. There are a lot of things I would have changed looking back. Mikhail says he tried to tell me something like this, but I found his critiques too confusing (like I thought he was saying mostly that it wasn’t bad to work with the military because it was cybersecurity, where to me that wasn’t the crux), and so those changes did not occur to me.
I mainly did not realize these things because I was really busy with logistics, not because I needed to be in soldier mindset to do the protest. (EDIT: I mean, maybe some soldier mindset is required and takes a toll, but I don’t think it would have been an issue here. If someone had presented me with a press release with all the revisions I mentioned above to send out as a correction instead of the one I sent out, I would have thought it was better and sent it instead. The problem was more that I panicked and wanted to correct the mistake immediately, and wasn’t thinking of other things that should be corrected because of it.) Mikhail may have felt I was being soldier-y because I wouldn’t spend more time trying to figure out what he was talking about, but that had more to do with me thinking I had basically understood his point (which I took to be basically that I wasn’t including enough details in the promotional materials so people wouldn’t have a picture he considered accurate enough) and just disagreed with it (I thought space was limited, and many rationalists do not appreciate the cost of extra words and thoughts in advocacy communication).
I thought he was saying mostly that it wasn’t bad to work with the military because it was cybersecurity, where to me that wasn’t the crux
? That is clearly not what I was telling you. At the end of January, I told you pretty directly that for people who are not aware of the context what you wrote might be misleading, because you omitted crucial details. It’s not about how good or bad what OpenAI is doing is. It’s about people not having important details of the story to judge for themselves.
Mikhail may have felt I was being soldier-y because I wouldn’t spend more time trying to figure out what he was talking about, but that had more to do with me thinking I had basically understood his point (which I took to be basically that I wasn’t including enough details in the promotional materials so people wouldn’t have a picture he considered accurate enough) and just disagreed with it.
I’m not sure where that’s coming from. I suggest you look at the messages we exchanged around January 31 and double-check that you’re not misleading people here.
It seems to me that you are not considering the possibility that you may in fact not have said this clearly, and that this was a misunderstanding that you could have prevented by communicating another way.
I don’t think the miscommunication can be blamed on any one party specifically. Both could have taken different actions to reduce the risk of misunderstanding. I find it reasonable for each of them to think they had more important stuff to do than spend 10x the time reducing the risk of misunderstanding, and to think the responsibility was on the other person.
To give my two cents on this, each time I talked with Mikhail he had really good points on lots of topics, and the conversations helped me improve my models a lot. However, I do have a harder time understanding Mikhail than understanding the average person, and definitely feel the need to put in lots of work to get his points. In particular, his statements tend to feel a lot like attacks (like saying you’re deceptive), and it’s straining to decouple and not get defensive to just consider the factual point he’s making.
EDIT: looking at the further replies from Holly and looking back at the messages we exchanged, I’m no longer certain it was miscommunication and not something intentional. As I said elsewhere, I’d be happy for the messages to be shared with a third party. (Please ignore the part about certainty in the original comment below.)
There certainly was miscommunication surrounding the draft of this post, but I don’t believe they didn’t understand that people could be misled back at the end of January.
You said a lot of things to me, not all of which I remember, but the above were two of them. I knew I didn’t get everything you wanted me to get about what you were saying, but I felt that I understood enough to know what the cruxes were and where I stood on them.
You said:
I told you pretty directly that for people who are not aware of the context what you wrote might be misleading, because you omitted crucial details
I said:
which I took to be basically that I wasn’t including enough details in the promotional materials so people wouldn’t have a picture he considered accurate enough
Are these not the same thing?
I wouldn’t care if people knew some number to some approximation and not fully. This is quite different from saying something that’s technically not false but creates a misleading impression you thought was more likely to get people to support your message.
I don’t want to be spending time this way, and would be happy if you found someone we’d both be comfortable with reading our message exchange and figuring out how deceptive or not the protest messaging was.
(Edit: I no longer believe the above is an accurate account of what happened and retract the below.)
Good to see this. I’m sad that you didn’t think that after reading “Even the best-intending people are not perfect, often have some inertia, and aren’t able to make sure the messaging they put out isn’t misleading and fully propagate updates” in the post. As I mentioned before, it could’ve been simply a postmortem.
I found that sentence unclear. It’s poorly written and I did not know what you meant by it. In context you were not saying I had good intentions; you declined to remove the “deceptive” language earlier because you thought I could have been deceptive.