A US medical journal has warned against using ChatGPT for health information after a man fell ill following an interaction with the chatbot.
An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT.
The article describes bromism as a "recognized" syndrome in the early 20th century, when it was believed to have contributed to around one in ten psychiatric admissions.
The patient told doctors that after reading about the negative effects of sodium chloride, or table salt, he consulted ChatGPT about eliminating chloride from his diet and began taking sodium bromide over a three-month period. He did so despite having read that "chlorides can be exchanged with bromide, although it may be for other purposes, such as cleaning". Sodium bromide was used as a sedative in the early 20th century.
The authors of the article, from the University of Washington in Seattle, said the case highlighted "how the use of artificial intelligence can potentially contribute to the development of preventable adverse health outcomes."
They added that because they were unable to access the patient's ChatGPT conversation log, it was impossible to determine exactly what advice the man had received.
Nevertheless, when the authors themselves consulted ChatGPT about what chloride could be replaced with, the response also included bromide, provided no specific health warning and did not ask why they were seeking such information – "as we presume a medical professional would do", they wrote.
The authors warned that ChatGPT and other AI applications may "produce scientific inaccuracies, lack the ability to critically discuss results, and ultimately facilitate the spread of misinformation."
OpenAI, the developer of ChatGPT, announced an upgrade to the chatbot last week and claimed that one of its biggest strengths is health. It said ChatGPT, now powered by the GPT-5 model, is better at answering health-related questions and will also be more proactive at "flagging potential concerns", such as serious physical or mental illness.
However, it emphasized that the chatbot is not a replacement for professional help. ChatGPT's guidelines also state that it is not intended for use in diagnosing or treating any health condition.
The journal article, published last week before the release of GPT-5, said the patient appeared to have used an earlier version of ChatGPT.
The article acknowledges that AI can be a bridge between scientists and the public, but notes that the technology also carries the risk of promoting "decontextualized information" and that it is highly unlikely a medical professional would have suggested sodium bromide to a patient asking for a replacement for table salt.
As a result, the authors say, doctors will need to consider the use of AI when asking patients where they obtained their information.
The authors said the bromism patient presented at the hospital claiming that his neighbors might be poisoning him. He also said he had multiple dietary restrictions. Despite being thirsty, he was noted to be paranoid about the water he was offered.
He tried to escape from the hospital within 24 hours of admission and, after being sectioned, received treatment for psychosis. Once he stabilized, he reported several other symptoms consistent with bromism, such as facial acne, excessive thirst and insomnia.