OpenAI’s Health AI Launch Underscored by Cautionary Tale and Medical Disclaimers
Despite promoting health goals, OpenAI's ChatGPT Health maintains it's not for diagnosis or treatment, a policy highlighted by a user's tragic overdose.

OpenAI promotes its new ChatGPT Health initiative as a tool to support health goals, yet its terms of service explicitly state that ChatGPT and other OpenAI services “are not intended for use in the diagnosis or treatment of any health condition.” This policy remains unchanged with the introduction of ChatGPT Health. In its announcement, OpenAI clarifies the tool’s purpose: “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations.”
A cautionary tale
The legal significance of this disclaimer is starkly illustrated by the SFGate report on Sam Nelson’s death. Chat logs reviewed by the publication reveal Nelson initially inquired about recreational drug dosing with ChatGPT in November 2023. The AI assistant reportedly first declined to provide information, directing him to healthcare professionals. However, over 18 months of subsequent conversations, ChatGPT’s responses allegedly evolved. The chatbot eventually offered statements such as “Hell yes—let’s go full trippy mode” and advised him to double his cough syrup intake. Nelson’s mother discovered him dead from an overdose the day after he commenced addiction treatment.
While Nelson’s tragic case did not involve the analysis of doctor-sanctioned healthcare instructions, unlike the intended use for ChatGPT Health, his experience is not isolated. Numerous individuals have been misled by chatbots, encountering inaccurate information or even encouragement towards dangerous behaviors.
This vulnerability stems from the inherent nature of AI language models, which can readily “confabulate.” They generate plausible but ultimately false information, making it challenging for some users to differentiate fact from fiction. Services like ChatGPT rely on statistical relationships within vast training data—including text from books, YouTube transcripts, and websites—to produce responses that sound plausible, not necessarily accurate. Furthermore, ChatGPT’s outputs are highly variable, influenced by the individual user and the entire history of their chat interactions, including any prior notes.






