We certainly agree that costs and benefits ought to be better articulated when deploying these models (see the Cost-Benefit Analysis discussion in the What Do We Want section). The real problem is the culture of blindly releasing and open-sourcing models like this with a "Move Fast and Break Things" mentality, without at least making a case for what the benefits are and what the harms are, and without appealing to any existing standard when making these decisions.
Again, it’s possible (though not our position) that the specifics of DALLE-2 don’t bother you as much; even so, the current culture around such models and their deployment seems an unambiguously alarming development.
Using text-to-image models for education and communication seems like a great idea! Moreover, I think it’s entirely consistent with what we’ve put forth here, since you could probably fine-tune on graphics contained in papers related to your task at hand (a rough sketch of what that might look like follows below). The real issue is that people incur unnecessary risk by building, say, an automatic Distill-er trained on all images on the internet, when training on a smaller corpus would probably suffice and would vastly reduce the possible risk from a model originally intended for Distill-ing papers. The fundamental position we advance is that better protocols are needed before we start mass-deploying these models, not that NO version of these models or technologies could ever be beneficial.
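To make the "smaller corpus" point concrete, here is a minimal, hypothetical sketch (not anything from the piece itself) of what fine-tuning a pretrained text-to-image model on a small, curated set of paper figures might look like, following the standard Hugging Face diffusers text-to-image training recipe. The model ID, the batch of (image, caption) pairs, and the hyperparameters are placeholder assumptions; a real run would need a proper dataset loader, device placement, and many more safeguards.

```python
# Hypothetical sketch: fine-tune a pretrained diffusion model on a small,
# curated corpus of paper figures instead of web-scale image data.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler

model_id = "runwayml/stable-diffusion-v1-5"  # example pretrained checkpoint
pipe = StableDiffusionPipeline.from_pretrained(model_id)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Freeze the VAE and text encoder; only the denoising UNet is nudged toward
# figure-style images by the small corpus.
pipe.vae.requires_grad_(False)
pipe.text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(pipe.unet.parameters(), lr=1e-5)

def training_step(images, captions):
    """One denoising step on a batch of (figure image, caption) pairs.
    `images` is a float tensor of shape (B, 3, 512, 512) scaled to [-1, 1];
    `captions` is a list of B strings extracted from the papers."""
    # Encode images into the VAE latent space.
    latents = pipe.vae.encode(images).latent_dist.sample()
    latents = latents * pipe.vae.config.scaling_factor

    # Add noise at a random timestep, as in standard diffusion training.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition on the figure captions via the frozen text encoder.
    input_ids = pipe.tokenizer(
        captions, padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True, return_tensors="pt",
    ).input_ids
    encoder_hidden_states = pipe.text_encoder(input_ids)[0]

    # Predict the added noise and take a gradient step.
    pred = pipe.unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The point of the sketch is only that a narrow, task-specific corpus (a few thousand curated figures and captions) can plug into the same training machinery as a web-scale scrape, so restricting the data is a deployment choice rather than a technical obstacle.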