It’s important to react with an open mind to outside criticism of EA work, and to especially engage with the strong points. Most of the responses posted here so far (including the links to tweets of other researchers) fail to do so.
Yes, the article's tone is far more accusatory than its content warrants. But the two main criticisms are actually clear and fairly reasonable, particularly given that OpenAI (per the article) acknowledges the importance of being respected in the greater machine learning community:
1) Whatever you think about the value of openness in AI research, if you call yourself OpenAI(!), people WILL expect you to be open about your work. Even though the charter was changed to reflect this, most people will not be aware of the change.
2) I actually agree with the article that much of OpenAI’s press releases feel like exaggerated hype. While I personally agree with the decision not to immediately release GPT-2, it was communicated with the air of “it’s too dangerous and powerful to release”. This was met with a strong negative reaction, which is not how you become the trusted authority on AI safety (see this discussion: https://www.reddit.com/r/MachineLearning/comments/aqovhz/discussion_should_i_release_my_mnist_model_or/).
Another instance that I personally thought was pretty egregious was the announcement of Microsoft’s investment (https://openai.com/blog/microsoft/):

> We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI.
Note that this sentence does not include “attempt” or “we hope will scale”. It is hard to read it without coming away with the impression that OpenAI has a very high degree of confidence in its ability to build an AGI, and is promising as much to the world.
On (2), I would note that the ‘hype’ criticism is commonly levelled both at a range of individual groups in AI and at the field as a whole. Criticisms of DeepMind’s claims and of IBM’s (the usefulness/impact of IBM Watson in health) come immediately to mind, as do claims by a range of groups regarding the deployment of self-driving cars; for the field as a whole, see various comments by Gary Marcus, Jack Stilgoe, and others. This does not necessarily mean the criticism is untrue of OpenAI (or that OpenAI is not among the ‘hypier’), but I think it’s worth noting that it is not unique to OpenAI.