ChatGPT’s usage terms now forbid it from giving legal and medical advice:
So you cannot use our services for: provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional
(https://openai.com/en-GB/policies/usage-policies/)
Some users are reporting that ChatGPT refuses to give certain kinds of medical advice. I can’t figure out if this also applies to API usage.
It sounds like the regulatory threats and negative press may be working, and it'll be interesting to see whether other model providers follow suit, and whether any jurisdictions formally regulate this (I can see the EU doing so, but not the U.S.).
In my opinion, the upshot of this is probably that OpenAI are ceding this market to specialised providers who can afford the higher marginal costs of moderation, safety, and regulatory compliance (or black-market-style providers who refuse to put these safeguards in place and don't bow to regulatory pressure). This is probably a good thing: the legal, medical, and financial industries have clearer, industry-specific regulatory frameworks that can more adequately monitor for and prevent harm.
In general, I tend to disregard anything any tech company adds to their terms of service. People often read the tea leaves of a company's grand strategy from lines added to its TOS, but isn't it more likely that these changes are made by some low-level employee in the legal department without the knowledge of the C-suite or other top executives?
And, indeed, The Verge seems to agree with me here (emphasis added):
OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI’s head of health AI, writes on X that the claims are “not true.” … According to Singhal, the inclusion of policies surrounding legal and medical advice “is not a new change to our terms.” … OpenAI previously had three separate policies, including a “universal” one, as well as ones for ChatGPT and API usage. With the new update, the company has one unified list of rules that its changelog says “reflect a universal set of policies across OpenAI products and services,” but the rules are still the same.
My first thought when reading the line you quoted from the TOS was that it's already common sense, and OpenAI's TOS probably already forbade it; or, if it didn't, that was a slip-up and they were trying to clarify a position they had long held. The Verge article seems to confirm this.
Also, I generally don't buy the story that specialized providers can or will offer LLMs that do better at dispensing advice in specialized fields. LLMs themselves don't specialize; such providers are largely building on the same general-purpose base models. As for the software layer on top of the LLM, it's hard to see anyone doing significantly better than OpenAI, Google, or Anthropic. Even if they could do a bit better in some specialized area like law or medicine, would it be enough to really make a difference? Finally, if we're talking about anything that involves a significant ratio of manual human effort to LLM queries, that would seem to blow up the whole unit economics of LLMs, which are already challenging enough. (I think it's very likely we'll see the popping of an AI financial bubble within 3 years or so, since current valuations are based on expectations of huge growth, and the factors underlying growth, like data and compute scaling, seem to be running out of steam.)
In my experience, you get better advice anyway if you frame the question as though you are a professional. So instead of "Here is a picture of my rash; what do you think?", you say "A patient has provided this picture of a rash; what is your diagnosis?"
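For anyone curious how the two framings compare in practice, here's a minimal sketch against the API using the official openai Python package. The model name and the rash description are purely illustrative, and I've swapped the picture for a text description to keep it self-contained; you'd need an API key set in your environment.

```python
# Minimal sketch: compare a first-person framing with a "professional" framing.
# Assumes the openai Python package (>= 1.0) and OPENAI_API_KEY in the environment;
# the model name and the symptom description are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

first_person = "I have a red, itchy rash spreading on my forearm. What do you think it is?"
professional = (
    "A patient presents with a red, itchy rash spreading on the forearm. "
    "What is your differential diagnosis?"
)

for label, prompt in [("first person", first_person), ("professional", professional)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Whether the professional framing actually draws out a fuller answer will of course vary by model and over time.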
If OpenAI are sincere in adding this to their ToS, or there's further regulatory pressure, the models will presumably get better at preventing this kind of workaround. I think that's important.