[Question] Is there a news-tracker about GPT-4? Why has everything become so silent about it?
The latter question probably just reflects a bias in my own media consumption, but even with my best internet-search efforts, I have a hard time finding anything interesting about GPT-4 (the name people seem to use for a new generation of LLMs following GPT-3). Presumably this is simply a result of OpenAI not releasing any new information, leaving nothing for news outlets to report.
Most of OpenAI's public communication about the LLMs they build seems to focus on GPT-3 series models, in particular fine-tuned ones. That is not directly surprising, as these fine-tuned models are a great source of income for OpenAI. However, given their past release rate for the GPT series (GPT in 2018, GPT-2 in 2019, and GPT-3 in 2020), they seem to be taking quite some time with the next series (it is almost 2023). This suggests two intuitive hypotheses (both of which are probably far too simple to be close to reality): either OpenAI is somewhat stuck and has a hard time keeping up with its past pace of "game-changing" progress on LLMs, or OpenAI has made very significant progress in recent years and decided not to publicise it for strategic reasons (e.g. to avoid accelerating the "race to AGI").
Any thoughts or pointers on that?