I don’t know why (I thought it was a good post) but I have some guesses:
Maybe people don’t like the framing of o1 as “lying” when it’s not clear that it lied. All that’s clear is that o1 gave a false justification, which isn’t necessarily lying.
You tend to write in an alarmist rhetorical style that I think turns off a lot of people. I think you are very much correct to be alarmed about AI x-risk, but also I don’t think it’s a good persuasive strategy (for an EA/rationalist audience) to convey this through emotionally charged rhetoric. I didn’t think this particular post was alarmist, but you have a history of writing alarmist posts/comments, so maybe people downvoted based on the title.
Thanks. I’m wondering now whether it’s mostly because I’m quoting Shakeel, and there’s been some (mostly unreasonable imo) pushback on his post on X.
The alarmist rhetoric is kind of intentional. I hope it’s persuasive to at least some people. I’ve been quite frustrated post-GPT-4 over the lack of urgency in EA/LW over AI x-risk (as well as the continued cooperation with AGI accelerationists such as Anthropic). Actually to the point where I think of myself more as an “AI notkilleveryoneist” than an EA these days.