The empirical track record is that the top 3 AI research labs (Anthropic, DeepMind, and OpenAI) were all started by people worried that AI would be unsafe, who then went on to design and implement a bunch of unsafe AIs.
100% agree. I’m sometimes confused as to why, on this “evidence-based” forum, this doesn’t get front-page attention and traction.
At a guess, perhaps some people on this forum are friends with, or connected to, people involved in these orgs and want to avoid confronting this head-on? Or maybe they want to keep good relationships with them so they can still exert some influence?