I understand where you’re coming from, but I wonder whether this would also have negative consequences. Perhaps it would increase the pace of AI development: it would make LLMs more useful, which might attract even more investment in AI. And maybe it would also make LLMs generally smarter, which could likewise accelerate AI progress (this is not my area, I’m just speculating). Some EA folks are even protesting to pause AI, so faster progress might not be great. It would help all research, but not all research makes the world better. For example, it could benefit research into more efficient animal farming, which could be bad for animals. Considerations like these would leave me too unsure about the sign of the impact to eagerly support such a cause, unfortunately.
Indeed, as with any technology, we must be vigilant about potential negative consequences. For example, back in the day we were among the signatories opposing experiments that create potential pandemic pathogens, a stance that history has since validated, as we now know all too well. However, I do not view large language models (LLMs) in the same light. I believe LLMs will inevitably become a primary source of information for society, and this can be a very positive development. One way to guide the technology toward beneficial outcomes is to feed it original scientific sources that have already been published.
Regarding the impact of AI on animal welfare, this is, of course, a critically important topic. We wrote a piece laying out our position some time ago but haven’t published it yet. Motivated by your comment, we plan to do so in the coming days, and I would appreciate your thoughts on it once it’s available.