Good points, and I agree with this; trends 1 and 3 seem especially important to me. As you note, though, the competitive (and safety) reasons for secrecy and research automation probably dominate.
Another implication of current trends in AI progress is that it now seems (far) less likely that the first AGIs will be brain emulations. This in turn makes it less likely that AIs will be moral patients (I think). I am inclined to think that is good, at least until we are wise and careful enough to create flourishing digital minds.
Two quibbles:
“Given the amount of money invested in the leading companies, investors are likely to want to take great precautions to prevent the theft of their most valuable ideas.” This would be nice, but companies are generally only incentivised to prevent low-resourced actors from stealing their models. Putting in enough effort to make it hard for sophisticated attackers (e.g. governments) to steal the models is a far heavier lift, and probably not something AI companies will do of their own accord. (Possibly you already agree with this though.)
“The power of transformer-based LLMs was discovered collectively by a number of researchers working at different companies.” I thought it was just Google researchers who invented the Transformer? It is a bit surprising they published it; I suppose they just didn’t realise how transformative it would be, and there was a culture of openness in the AI research community.
“I thought it was just Google researchers who invented the Transformer?”
Google engineers published the first version of a transformer. I don’t think it was in a vacuum, but I don’t know how much they drew from outside sources. Their model was designed for translation, and was somewhat different from BERT and GPT-2. I meant that there were a lot of different people and companies whose work resulted in the form of LLM we see today.
“To put in enough effort to make it hard for sophisticated attackers (e.g. governments) to steal the models is a far heavier lift and probably not something AI companies will do of their own accord.”
This is outside my expertise. I imagine techniques are even easier to steal than weights. But if theft is inevitable, I am surprised OpenAI is worth as much as it is.