Yes, I think this is a very useful phenomenon to point at. Some people have a very naïve understanding of what these labs do, especially technical AI safety researchers whose education has not put critical thinking at its heart. I have heard a lot of very naïve remarks about the political influence exerted by these labs, and I am worried that these researchers lack a more global understanding of the effects of their work.
Given OpenAI’s recent changes to its military-use ban and to the transparency of its documents, I find myself increasingly cautious about trusting anyone working on AI safety. I would love to see representatives of these labs credibly address the concerns raised in this post.