I usually agree with @Linch, but strongly disagree here. I struggle to understand the causal pathways by which misunderstanding or “nearby messages” would do more harm than good. I also think the four bullet-pointed thoughts are unlikely misunderstandings. And even if people did take those away, it’s good that they’ve started thinking about it.
More coverage = better, and accuracy and nuance aren’t so important right now.
I’ll just copy-paste from the OP because they put it so well:
“the public is starting from a place of ~complete ignorance. Anyone reading about AI Safety for the first time is not going to totally absorb the details of the problem. They won’t notice if you e.g. inaccurately describe an alignment approach—they probably won’t remember much that you say beyond “AI could kill us all, like seriously”. And honestly, this is the most important part anyway. A tech person interested in learning the technical details of the problem will seek out the better coverage and find one of the excellent explainers that already exist. A policymaker wanting to regulate this will reach out to experts. You as a communicator just have to spread the message.”
I’ve got a friend who often says (kind of jokingly) “We’re all going to die” when he talks about AI. It gets people interested, makes them laugh and gets the word out there.
I think AGI research is bad. I think starting AGI companies is bad. I think funding AGI companies is bad. I think working at AGI companies is bad. I think nationalizing and subsidizing AGI companies is probably bad. I think AGI racing is bad. I think hype that causes the above is bad. I think outreach and community building that causes the above is bad.
Also, the empirical track record here is pretty bad.
Agree with this 100%: “I think AGI research is bad. I think starting AGI companies is bad. I think funding AGI companies is bad. I think working at AGI companies is bad. I think nationalizing and subsidizing AGI companies is probably bad. I think AGI racing is bad.”
Thanks, are you arguing that raising AI safety awareness will do more harm than good by increasing the hype and profile of AI? That’s interesting, I’ll have to think about it!
What do you mean by “the empirical track”?
Thanks!
The empirical track record is that the top 3 AI research labs (Anthropic, DeepMind, and OpenAI) were all started by people worried that AI would be unsafe, who then went on to design and implement a bunch of unsafe AIs.
100% agree. I’m sometimes confused as to why, on this “evidence-based” forum, this doesn’t get front-page attention and traction.
At a guess, perhaps some people involved in the forum here are friends with, or connected to, some of the people involved in these orgs and want to avoid confronting this head-on? Or maybe they want to keep good relationships with them so they can still influence them to some degree?
Should be “empirical track record.” Sorry, fixed.