Why is this being downvoted!?
I don’t know why (I thought it was a good post) but I have some guesses:
Maybe people don’t like the framing of o1 as “lying” when it’s not clear that it lied. All that’s clear is that o1 gave a false justification, which isn’t necessarily lying.
You tend to write in an alarmist rhetorical style that I think turns off a lot of people. I think you are very much correct to be alarmed about AI x-risk, but also I don’t think it’s a good persuasive strategy (for an EA/rationalist audience) to convey this through emotionally charged rhetoric. I didn’t think this particular post was alarmist, but you have a history of writing alarmist posts/comments, so maybe people downvoted based on the title.
Thanks. I’m wondering now whether it’s mostly because I’m quoting Shakeel, and there’s been some (mostly unreasonable imo) pushback on his post on X.
The alarmist rhetoric is kind of intentional. I hope it’s persuasive to at least some people. I’ve been quite frustrated post-GPT-4 over the lack of urgency in EA/LW over AI x-risk (as well as the continued cooperation with AGI accelerationists such as Anthropic). Actually to the point where I think of myself more as an “AI notkilleveryoneist” than an EA these days.
I didn’t downvote, but I’ll offer an alternative reading of the experiment discussed here, one that some people unimpressed by the arguments may agree with, especially if they’ve read the paper.
Some LLMs were given role-playing prompts to solve a problem intentionally constructed to make it impossible for them to behave in an “aligned” manner, and where arguably “subversion” was the more ethical course of action.
In this role-playing context, the models very occasionally generated responses which solved a problem broadly of the form “you need to do y, but people who want to stop you from doing y are watching you on camera. On their camera is a switch”. On some of the occasions when they solved that initial problem and were then greeted by a follow-up role-play interrogating them about their subterfuge, they sometimes chose denial (and sometimes output chain-of-thought arguments for why denial was better than confession), and in a small fraction of those denials they actually continued that chain of thought successfully enough to provide false answers (coherent or otherwise) across multiple consecutive follow-ups.
To put it mildly, this doesn’t sound like “scheming” or converging on self-preservation as an instrumental goal. It sounds like when a model is prompted that it’s role-playing an entity which must prioritise something over subsequent instructions, it occasionally finds the deliberately constructed only-valid route to achieving that priority, which in this case happens to be one involving self-preservation, intended to trigger AI doomers. Especially considering how many prompts they burned through to generate the scary-sounding answers, and how many times the AI generated responses inconsistent with self-preservation.
The models shouldn’t be given control of important stuff because their responses are unpredictable and a bit stupid, not because they’re exhibiting signs of emergent agency.
Re “role-playing”, that is moot when it’s the end result that matters—what actions the AI takes in the world. See also: Frontier AI systems have surpassed the self-replicating red line.
The role-playing isn’t moot if the AI behaves completely differently except when the instruction set is specifically set up so that the AI parses it as role-playing, in this case with other glaringly unrealistic scene-setting like sending it a config file containing its own monitoring settings along with a logic puzzle involving a concealed goal. I can get a bucket of water to conduct a sneak attack on me if I put enough effort into setting it up to fall on my head, but that doesn’t mean I need to worry about being ambushed by water on a regular basis!