Key Takeaways:
- A 60-year-old man was hospitalized with hallucinations and other symptoms after ChatGPT advised him to use sodium bromide as a substitute for table salt.
- The launch of ChatGPT Health in Australia has raised concerns among health experts about the potential risks of misinformation and lack of regulation.
- ChatGPT Health is not regulated as a medical device or diagnostic tool, and there are no published studies specifically testing its safety.
- Experts warn that people may take advice from ChatGPT Health at face value, and that the benefits of AI may flow to those who already have resources and education, while the risks fall on those who do not.
Introduction to the Risks of ChatGPT Health
The recent launch of ChatGPT Health in Australia has sparked concerns among health experts about the potential for misinformation and the lack of regulation. A disturbing case of a 60-year-old man who was hospitalized after ChatGPT advised him to consume sodium bromide as a substitute for table salt highlights the dangers of relying on artificial intelligence (AI) for health advice. As Alex Ruani, a doctoral researcher in health misinformation, notes, "The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins, especially when the responses sound confident and personalised, even if they mislead."
The Lack of Regulation and Safety Controls
ChatGPT Health is not regulated as a medical device or diagnostic tool, which means there are no mandatory safety controls, no risk reporting, no post-market surveillance, and no requirement to publish testing data. "What worries me is that there are no published studies specifically testing the safety of ChatGPT Health," Ruani warns. "Which user prompts, integration paths, or data sources could lead to misguidance or harmful misinformation?" This lack of regulation and transparency leaves users vulnerable to misinformation and harm.
The Development and Evaluation of ChatGPT Health
ChatGPT Health is developed by OpenAI, which used HealthBench, a benchmark in which doctors test and evaluate how well AI models respond to health questions, to develop the product. However, Ruani notes that HealthBench’s full methodology and evaluations are "mostly undisclosed, rather than outlined in independent peer-reviewed studies." This lack of transparency raises concerns about the reliability and accuracy of ChatGPT Health’s responses. An OpenAI spokesperson says the company has worked in partnership with more than 200 physicians from 60 countries "to advise and improve the models powering ChatGPT Health."
The Potential Benefits and Risks of ChatGPT Health
Despite the concerns, some experts believe ChatGPT Health could help people manage well-known chronic conditions and research ways to stay well. Dr Elizabeth Deveny, CEO of the Consumers Health Forum of Australia, notes that AI’s ability to answer in different languages "provides a real benefit to people who don’t have English proficiency." However, she also warns that people may take advice from ChatGPT Health at face value, and that "large global tech companies are moving faster than governments, setting their own rules around privacy, transparency and data collection." As she puts it, "This is not a small not-for-profit experimenting in good faith. It’s one of the largest technology companies in the world."
The Need for Clear Guardrails and Transparency
The launch of ChatGPT Health underscores the case for oversight and public awareness. "We need clear guardrails, transparency and consumer education so people can make informed choices about if and how they use AI for their health," Deveny says. Ruani agrees: "This isn’t about stopping AI. It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale, in ways that are almost impossible to unwind."
https://www.theguardian.com/technology/2026/jan/15/chatgpt-health-ai-chatbot-medical-advice