Artificial intelligence chatbots are increasingly being used to answer medical questions, prompting experts to urge caution as technology companies introduce tools designed to analyse personal health data.
The trend gained momentum earlier this year when OpenAI introduced ChatGPT Health, a specialised version of its chatbot that can analyse medical records, wellness app data and information from wearable devices. Access is currently limited to a waiting list while the company expands the service.
Another technology firm, Anthropic, offers comparable health-related features through its Claude chatbot for some users. Both companies say their systems rely on large language models and are not intended to replace doctors or provide formal medical diagnoses.
Instead, the tools are designed to explain complex test results, review health data and help patients prepare questions before visiting a doctor. They can also identify patterns in health records or activity data that might otherwise go unnoticed.
The rise of such services reflects the growing number of people seeking advice from AI tools. Hundreds of millions of users already turn to chatbots for information on everyday topics, and health questions have become a common part of that activity.
Some doctors believe AI chatbots could provide benefits if used carefully. They say the systems can often give more personalised responses than a standard internet search.
Dr Robert Wachter, a medical technology specialist at the University of California, San Francisco, said the tools may help patients who would otherwise rely on guesswork.
“The alternative often is nothing, or the patient winging it,” Wachter said. “And so I think that if you use these tools responsibly, I think you can get useful information.”
In countries such as the United States and the United Kingdom, where patients may wait weeks for a routine appointment or spend hours in urgent care clinics, chatbots may help reduce unnecessary worry by providing quick explanations for minor symptoms.
However, medical experts stress that serious symptoms should never be assessed by AI alone. Shortness of breath, chest pain or severe headaches could indicate medical emergencies that require immediate professional attention.
Dr Lloyd Minor, dean of Stanford University’s medical school, said people should approach AI tools with caution even in less urgent situations.
“If you’re talking about a major medical decision, or even a smaller decision about your health, you should never be relying just on what you’re getting out of a large language model,” he said.
Privacy is another major concern. Many AI health features require users to upload sensitive medical data for analysis. In the United States, the federal privacy law known as HIPAA protects medical records held by healthcare providers and insurers, but it does not extend to the technology companies that operate chatbots.
Researchers also say the technology is still evolving. Studies show that AI systems can perform well on medical exams but sometimes struggle when interacting with real users who may provide incomplete or unclear information.
Experts say people who use AI chatbots for health questions should treat the responses as guidance rather than medical advice and confirm important information with qualified professionals.
