ChatGPT Stops Offering Medical and Legal Advice

A recent update to ChatGPT's usage policy has sparked discussion among users, as the model now refuses to provide personalized medical and legal advice. According to reports on Reddit, the AI will no longer analyze medical images like MRIs, X-rays, or photos of skin conditions. Instead, it offers only general educational information and directs users to consult with licensed professionals for diagnoses or specific guidance.

Users have noted that common workarounds, such as framing a request as a “hypothetical situation,” are no longer effective. The updated safety filters consistently prevent the model from offering specific advice, reinforcing the new policy.

Why the Change?

While OpenAI has not officially commented on the change, the move is widely seen as an attempt to mitigate legal risk. The use of AI for sensitive professional advice remains a poorly regulated area, creating potential liability for both the company and its users. The shift comes as a growing number of people turn to chatbots for complex consultations, with some reports even crediting AI with helping users in legal cases.

It’s important to remember that conversations with ChatGPT are not protected by doctor-patient or attorney-client privilege, a point OpenAI CEO Sam Altman has publicly warned about. A court could subpoena conversation records for use as evidence.

OpenAI is reportedly working with regulatory bodies to clarify the chatbot's legal status, but for now, the platform is taking a more cautious approach to high-stakes advice.