Is there a rationale for a moratorium on large models at this moment rather than some time later? The letter contains not a single mention of GPT-4's capabilities or why exactly they are a concern right now. Most of it talks about future possibilities for AI, and while I understand those are a concern, what exactly about GPT-4 makes them relevant right now?
The six-month duration also seems entirely arbitrary. In any case, I feel the letter would benefit from some rationale or explanation, even a vague one, for both the choice of a six-month moratorium and the decision to call for it now of all times.
Also, the letter mentions GPT-4 alongside various AI safety risks and seems to associate the two, yet it makes no explicit statement about what safety risks models larger than GPT-4 are likely to create. This kind of rhetoric rather disturbs me.