Thanks for the elaborate answer. I'd be curious to hear a bit more of your thoughts regarding the meta-comment in your last paragraph, plus hints at what to change about the information environment you suspect here, if you have time.
Note: feel free to be as unrigorous as you want with the response; you don't need to justify beliefs, just flesh them out a bit. I don't intend to contest them, but want to use them to improve my understanding of the situation.
OK, writing quickly. Starting at the "object level" of the beliefs:
Sentiment or buzz, like the tweets about GPT-4 mentioned in the other comment, can be found by searching Twitter or other social media. That gives a different picture than the silence described in your post.
The content in my comment (e.g. my suggestion that OpenAI has various projects under way that compete for attention and PR effort) is publicly apparent, low-hanging speculation.
Let's say that OpenAI was actually unusually silent on GPT models, and let's also accept many EA views of AI safety. Even then, it's unclear why P(more than 1 year with no OpenAI GPT release | some sort of very extreme progress) would be large:
In the most terrible scenarios, we would already be experiencing some sort of hostile situation.
In other scenarios, OpenAI would aptly "simulate" a normally functioning OpenAI, e.g. by releasing an incremental new model.
Conversely, in my view P(very extreme progress | more than 1 year with no new GPT release) is low, because many other underlying states of the world would produce silence unrelated to an extreme breakthrough: a pivot, a management change, general dysfunction, etc.
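To make the shape of that update concrete, here is a toy Bayes calculation in Python. Every number in it is invented purely for illustration; the point is only that when mundane causes of silence dominate, the posterior on an extreme breakthrough stays small.

```python
# Toy Bayes update for P(extreme progress | >1 year of GPT silence).
# All probabilities are invented for illustration only.

p_extreme = 0.05                 # prior on some sort of extreme breakthrough
p_silence_given_extreme = 0.2    # as argued above, silence isn't even the likely response

# Mundane causes of silence: pivot, management change, general dysfunction, etc.
p_mundane = 1 - p_extreme
p_silence_given_mundane = 0.15

p_silence = (p_silence_given_extreme * p_extreme
             + p_silence_given_mundane * p_mundane)

posterior = p_silence_given_extreme * p_extreme / p_silence
print(f"P(extreme | silence) = {posterior:.2f}")  # ~0.07: silence barely moves the needle
```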
It seems like a pretty specific conjunction of beliefs, where there is some sort of extreme development and (see the back-of-the-envelope multiplication after this list):
OpenAI is sitting around, looking at their baby AI, but not producing any projects or other work.
They are stunned into silence, and this disrupts their PR.
This delay lasts for years.
No one has leaked anything about it.
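As a rough illustration of how quickly a conjunction like this shrinks, here is a back-of-the-envelope multiplication. Each factor is invented and deliberately generous, and the factors are treated as independent, which they would not be in reality:

```python
# Back-of-the-envelope conjunction: generous individual odds, small product.
# All numbers are invented for illustration only.
claims = {
    "some sort of extreme development happened": 0.10,
    "OpenAI sits on it, shipping no other work": 0.30,
    "stunned silence disrupts their PR":         0.30,
    "the delay lasts years":                     0.30,
    "no one leaks anything":                     0.20,
}

joint = 1.0
for claim, prob in claims.items():
    joint *= prob

print(f"joint probability = {joint:.4f}")  # ~0.0005 under these assumptions
```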
I tried to stay at the "object level" above, because it lays out the base-level reasoning, which you can find flaws in or contrast with your own.
On the meta level, it's not just that this is a specific conjunction of beliefs; a scenario where people are orchestrating silence about something critical is also very hard to disprove. Generally, that seems like the kind of situation where I want to make sure I have considered other viewpoints.