Seems to me that we’ll only see a change in course from relentless profit-seeking LLM development if intermediate AIs start misbehaving—smart enough to seek power and fight against control, but dumb enough to be caught and switched off.
I think instead of a boycott, this is a time to practice empathic communication with the public now that the tech is on everybody’s radar and AI x-risk arguments are getting a respectability boost from folks like Ezra Klein.
A poster on LessWrong recently harvested a comment from a NY Times reader that talked about x-risk in a way that clearly resonated with the readership. Figuring out how to scale that up seems like a good task for an LLM. In this theory of change, we need to double down on our communication skills to steer the conversation in appropriate ways. And we’ll need LLMs to help us do that. A boycott takes us out of the conversation, so I don’t think that’s the right play.
I love this, thanks!
One thing: I don’t understand how a boycott of one paid AI takes us out of the conversation. Why do we need LLMs to help us double down on communication?
Do you mean we need to show people the LLMs’ dodgy mistakes to help our argument?
IMO, the main potential power of a boycott is symbolic, and I think you only achieve that by eschewing LLMs entirely. Instead, we can use them to communicate, plan, and produce examples. As I see it, this needs to be a story about engaged and thoughtful users advocating for real responsibility with potentially dangerous tech, not panicky luddites mounting a weak-looking protest.
Gotcha, thanks. That makes sense.