Yeah, I think it’s quite likely that these reporters are basing their opinions on others’ opinions, rather than what people from the EA and AI safety communities are saying. I wonder if some search engine optimization could help with this?
This part was also interesting & frustrating:
“Regulations targeting these frontier models would create unique hurdles and costs specifically for companies that already have vast resources, like OpenAI and Anthropic, thus giving an advantage to less-resourced start-ups and independent researchers who need not be subject to such requirements (because they are building less dangerous systems),” Levine wrote (emphasis original).
I think this is a really good point that needs to be signal-boosted more (and obviously policy needs to actually do this). But then the article immediately follows it with this refutation, without offering any supporting arguments 😭:
Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. Venkatasubramanian said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.