In general, I tend to disregard anything a tech company adds to its terms of service. People often read lines added to the terms of service as tea leaves revealing a company’s grand strategy, but isn’t it more likely that these changes get made by some low-level employee in the legal department, without the knowledge of the C-suite or other top executives?
And, indeed, The Verge seems to agree with me here (emphasis added):
OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI’s head of health AI, writes on X that the claims are “not true.” … According to Singhal, the inclusion of policies surrounding legal and medical advice “is not a new change to our terms.” … OpenAI previously had three separate policies, including a “universal” one, as well as ones for ChatGPT and API usage. With the new update, the company has one unified list of rules that its changelog says “reflect a universal set of policies across OpenAI products and services,” but the rules are still the same.
My first thought when reading the line you quoted from the TOS was that it was already common sense, and OpenAI’s TOS probably already forbade it; or, if it didn’t, that was a slip-up, and they were clarifying a position they had already long held. The Verge article seems to confirm that.
Also, I generally don’t buy the story that specialized providers can or will offer LLMs that dispense better advice in specialized fields. LLMs don’t specialize. As for the software layer on top of the LLM, it’s hard to imagine anyone doing significantly better than OpenAI, Google, or Anthropic. Even if someone could do a bit better in a specialized area like law or medicine, would it be enough to really make a difference? Finally, if we’re talking about anything that involves a significant ratio of manual human effort to LLM queries, that would seem to blow up the whole unit economics of LLMs, which are already challenging enough. (I think it’s very likely we’ll see an AI financial bubble pop within three years or so, since current valuations are based on expectations of huge growth, and the factors underlying that growth, like data and compute scaling, seem to be running out of steam.)
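To make that unit-economics point concrete, here’s a back-of-envelope sketch. Every number in it is a hypothetical assumption I’m making up for illustration, not a real price from any provider:

```python
# Back-of-envelope sketch of the unit-economics point above.
# All figures are made-up assumptions for illustration only.

API_COST_PER_QUERY = 0.01      # assumed LLM inference cost per query, in dollars
HUMAN_RATE_PER_HOUR = 60.0     # assumed loaded cost of an expert reviewer, per hour
REVIEW_MINUTES_PER_QUERY = 5   # assumed human review time per answer

human_cost = HUMAN_RATE_PER_HOUR * (REVIEW_MINUTES_PER_QUERY / 60)
total_cost = API_COST_PER_QUERY + human_cost

print(f"LLM-only cost per query:   ${API_COST_PER_QUERY:.2f}")
print(f"With human review:         ${total_cost:.2f}")
print(f"Human share of total cost: {human_cost / total_cost:.1%}")
# With these assumed numbers, the human step is ~99.8% of the marginal
# cost per query, so the business scales like consulting, not software.
```

Under those (invented) numbers, adding even a few minutes of expert review per answer makes the human labor dominate the cost of the query by a couple of orders of magnitude, which is what I mean by blowing up the unit economics.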