Agree with this 100%: “I think AGI research is bad. I think starting AGI companies is bad. I think funding AGI companies is bad. I think working at AGI companies is bad. I think nationalizing and subsidizing AGI companies is probably bad. I think AGI racing is bad.”
Thanks, are you arguing that raising AI safety awareness will do more harm than good, by increasing the hype and profile of AI? That’s interesting; I’ll have to think about it!
What do you mean by “the empirical track”?
Thanks!
The empirical track record is that the top 3 AI research labs (Anthropic, DeepMind, and OpenAI) were all started by people worried that AI would be unsafe, who then went on to design and implement a bunch of unsafe AIs.
100% agree. I’m sometimes confused as to why, on this “evidence-based” forum, this doesn’t get front-page attention and traction.
At a guess, perhaps some people involved in the forum here are friends with, or connected to, some of the people involved in these orgs and want to avoid confronting this head-on? Or maybe they want to keep good relationships with them so they can still influence them to some degree?
Should be “empirical track record.” Sorry, fixed.