I upvoted and didn’t disagree-vote the original post (and generally agree with you on a bunch of the object level here!); however, I do feel some urge-towards-expressing-disagreement, which is something like:
Less disagreeing with claims; more disagreeing with frames?
Like: I feel the discomfort/disagreement less when you’re talking about what will happen, and more when you’re talking about how people think about warning shots
Your post feels something like … intellectually ungenerous? It’s not trying to look for the strongest version of the warning shots frame, it’s looking for a weak version and critiquing that (but it doesn’t seem very self-aware about that)
This just makes me feel like things are a bit fraught, and it’s trying to push my ontology around, or something, and I don’t quite like it
The title makes me feel especially uneasy in this regard (TBC I don’t think the weak version you’re critiquing is absent from the discourse; but your post reinforces the frame where that’s the core version of the warning shot concept, and I don’t want to reinforce that frame)
At the same time I think the post is making several valuable points! (This makes me sort of wish it felt a little ontologically gentler, which would make it easier to feel straightforwardly good about, and easier to link people to)
What is the “strong” version of warning shots thinking?
Honestly, maybe you should try telling me? Like, just write a paragraph or two on what you think is valuable about the concept / where you would think it’s appropriate to be applying it?
(Not trying to be clever! I started trying to think about what I would write here and mostly ended up thinking “hmm I bet this is stuff Holly would think is obvious”, and to the extent that I may believe you’re missing something, it might be easiest to triangulate by hearing your summary of what the key points in favour are.)
I thought I was giving the strong version. I have never heard an account of a warning shot theory of change that wasn’t “AI will cause a small-scale disaster and then the political will to do something will materialize”. I think the strong version would be my version, educating people first so they can understand small-scale disasters that may occur for what they are. I have never seen or heard this advocated in AI Safety circles before.
And I described how impactful ChatGPT was on me, which imo was a warning shot gone right in my case.
Right … so actually I think you’re just doing pretty well at this in the latter part of the article.
But at the start you say things like:
There’s this fantasy of easy, free support for the AI Safety position coming from what’s commonly called a “warning shot”. The idea is that AI will cause smaller disasters before it causes a really big one, and that when people see this they will realize we’ve been right all along and easily do what we suggest.
What this paragraph seems to do is to push the error-in-beliefs that you’re complaining about down into the very concept of “warning shot”. It seems implicitly to be telling people “hey you may have this concept, but it’s destructive, so please get rid of it”. And I don’t think even you agree with that!
This might instead have been written something like:
People in the AI safety community like to talk about “warning shots”—small disasters that may make it easier for people to wake up to the risks and take appropriate action. There’s a real phenomenon here, and it’s worth thinking about! But the way it’s often talked about is like a fantasy of easy, free support for the AI Safety position—when there’s a small disaster everyone will realize we’ve been right all along and easily do what we suggest.
Actually I think that that opening paragraph was doing more than the title to make me think the post was ontologically ungentle (although they’re reinforcing—like that paragraph shifts the natural way that I read the title).
Did you feel treated ungently for your warning shots take? Or is this just on behalf of people who might?
Also can you tell me what you mean by “ontologically ungentle”? It sounds worryingly close to a demand that the writer think all the readers are good. I do want to confront people with the fact they’ve been lazily hoping for violence if that’s in fact what they’ve been doing.
By “ontologically ungentle” I mean (roughly) it feels like you’re trying to reach into my mind and tell me that my words/concepts are wrong. As opposed to writing which just tells me that my beliefs are wrong (which might still be epistemically ungentle), or language which just provides evidence without making claims that could be controversial (gentle in this sense, kind of NVC-style).
I do feel a bit of this ungentleness in that opening paragraph towards my own ontology, and I think it put me more on edge reading the rest of the post. But as I said, I didn’t disagree-vote; I was just trying to guess why others might have.
See I feel really jerked around by this audience seemingly needing their ideas or their character or their intent affirmed in every argument. As if Rapoport’s rules are a contract or something and I owe you ego service in exchange for you thinking about helping me stave off AI doom. Don’t you care what’s true? Aren’t you interested in my point, perhaps even my perspective on the way of thinking I reject? EAs didn’t use to say this about my style bc they agreed with me then, but now they have this high-road excuse not to listen to me. I’m now to the point where I think, “why don’t you guys prove yourselves to me?” Prove you care about what happens to people and not just about protecting your discourse.
If you want to turtle up because I was mean to your ontology, whatever. You were obviously never going to help anyway.
Not sure quite what to say here. I think your post was valuable and that’s why I upvoted it. You were expressing confusion about why anyone would disagree, and I was venturing a guess.
I don’t think gentleness = ego service (it’s an absence of violence, not a positive thing). But also I don’t think you owe people gentleness. However, I do think that when you’re not gentle (especially ontologically gentle) you make it harder for people to hear you. Not because of emotional responses getting in the way (though I’m sure that happens sometimes), but literally because there’s more cognitive work for them to do in translating to their perspective. Sometimes you should bite that bullet! But by the same token that you don’t owe people gentleness, they don’t owe you the work to understand what you’re saying.
I was curious about guesses as to why this happens to me lately (a lot of upfront disagree votes and karma hovering around zero until the views are high enough) but getting that answer is still pretty hard for me to hear without being angry.
I’m curious whether you’re closer to angry that someone might read your opening paragraph as saying “you should discard the concept of warning shots” or angry that they might disagree-vote if they read it that way (or something else).
No, I’m angry that people feel affronted by me pointing out that normal warning shot discourse entailed hoping for a disaster without feeling much need to make sure that would be helpful. They should be glad that they have a chance to catch themselves, but instead they silently downvote.
Just feels like so much of the vibe of this forum is people expecting to be catered to, like their support is some prize, rather than people wanting to find out for themselves how to help the world. A lot of EAs have felt comfortable dismissing PauseAI bc it’s not their vibe or they didn’t feel like the case was made in the right way or they think their friends won’t support it, and it drives me crazy bc aren’t they curious??? Don’t they want to think about how to address AI danger from every angle?
Ok but jtbc that characterization of “affronted” is not the hypothesis I was offering (I don’t want to say it wasn’t a part of the downvoting, but I’d guess a minority).
I would personally kind of like it if people actively explored angles on things more. But man, there are so many things to read on AI these days that I do kind of understand when people haven’t spent time considering things I regard as critical path (maybe I should complain more!), and I honestly find it hard to fault people too much for using “did it seem wrong near the start in a way that makes it harder to think” as a heuristic for how deeply to engage with material.