A concerning observation from media coverage of AI industry dynamics
tl;dr: there are indications that ML engineers will migrate toward environments with less AI governance in place, which has implications for the tech industry and for global AI governance efforts.
=========================
I want to draw the community’s attention to some recent media coverage of AI companies. The source is ‘The Information’, a tech-business-focused online news outlet: https://www.theinformation.com/. I’ll note that its articles are (to my knowledge) all behind a paywall.
The first article in question is titled “Alphabet Needs to Replace Sundar Pichai”.
It outlines how Alphabet’s stock has stagnated in 2023 compared with other tech stocks such as Meta’s.
Here’s how they describe Google’s actions during the GPT mania:
“The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!).”
This brings us to the second article: “OpenAI’s Hidden Weapon: Ex-Google Engineers”
“As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch.
...
After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent.
Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said.”
Although there are many concerning themes here, I think the key point is in this last paragraph.
I’ve heard speculation in the EA / tech community that AI will trend toward alignment and safety because technology companies will be risk-averse enough to build alignment into their practices.
I think the articles suggest this dynamic is playing out to some degree: Google, at least, seems to be taking a more risk-averse approach to deploying AI systems.
The concerning observation is that there has been a two-pronged backlash against Google’s ‘conservative’ approach. Not only is the stock market punishing Google for ‘lagging’ behind the competition (despite Google having equal or better capability to deploy similar systems), but, according to this article, elite machine-learning talent is also pushing back on this approach.
To me this is doubly concerning. The ‘excess caution and layers of red tape’ described in the article are potentially the same kinds of measures that AI safety proponents would deem useful and necessary. Either way, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures.
Although further evidence would be valuable, there may be a trend unfolding whereby cautious firms are not only punished by financial markets but are also forced to weigh the risk of losing ML engineers who would rather work for firms with fewer AI governance measures.
From my limited understanding of industry economics, this dynamic makes sense; I recall reading in Michael Porter’s ‘Competitive Advantage’ that lower-ranked firms are more likely to take actions that damage the industry overall in order to advance their own position in the short term. In this instance, it means Microsoft is pushing the pace of AI deployment in ways that Google considers risky.
Overall, this trend seems to provide another counter-argument to the hypothesis that market incentives will produce sufficient levels of alignment. There are also concerning implications for governance of the global AI ecosystem: if some nations succeed in implementing effective AI governance policies, will that simply cause AI talent to migrate toward lower-governance jurisdictions?
I’d be keen to hear what other thinking and research has been done on this topic, as it appears to add a new dimension to the already tremendously complex issue of AI safety.