Asking for health advice from a chatbot? After the recent launch of OpenAI's ChatGPT Health service, there are several important factors to consider.

Владислав Вислоцкий Exclusive
This material was prepared by the K-News editorial team. All rights are reserved; copying or partial use of the material requires permission from the K-News editorial team.

With the launch of ChatGPT Health by OpenAI, several important aspects arise that should be considered before turning to AI chatbots for medical advice.

Given that millions of users are already seeking advice from chatbots, it was only a matter of time before tech companies began developing specialized solutions for health-related inquiries.

In January, OpenAI introduced an updated version of its chatbot — ChatGPT Health, which, according to the company, is capable of analyzing medical records and data from wearable devices to provide answers to medical questions.

Currently, access to the program is granted only on request. Competing company Anthropic offers similar features for users of its chatbot, Claude.

Both companies emphasize that their developments are large language models that cannot replace qualified medical care and should not be used for diagnosis. Instead, chatbots are capable of summarizing complex test results, helping prepare for doctor visits, and identifying significant changes in health status based on medical records.

However, how safe and accurate are these technologies in analyzing health data? Is it really worth relying on them?

Here are a few points to consider before discussing your health with AI:

Personalized information from chatbots may surpass Google

Some specialists who have worked with ChatGPT Health and similar platforms consider them a significant step forward in the field of medical information.

Although AI systems are not perfect and can sometimes provide incorrect advice, they often offer more personalized and specific responses than Google search results.

“Often, the patient has no alternative, or they are guessing,” notes Dr. Robert Wachter, a medical technology expert from the University of California, San Francisco. “Therefore, if these tools are used responsibly, they can provide real benefits.”

In countries like the UK and the US, where waiting times for doctor appointments can last for weeks and patients can wait hours in emergency rooms, chatbots help avoid unnecessary panic and save time.

Moreover, they can indicate the need for immediate medical attention if symptoms are dangerous.

One of the advantages of new chatbots is their ability to take into account users' medical histories, including current medications, age, and medical records.

Even if you do not share your medical data, Wachter and other experts advise providing as many details as possible to make the responses more accurate.

Do not turn to AI for alarming symptoms

Wachter and his colleagues emphasize that in some cases, it is necessary to seek medical help immediately rather than rely on a chatbot. Symptoms such as shortness of breath, chest pain, or severe headaches may indicate serious health issues.

Even in less critical situations, both patients and doctors should approach AI programs with caution, notes Dr. Lloyd Minor from Stanford University.

“When it comes to serious medical decisions or even less significant health issues, one cannot rely solely on the information provided by large language models,” adds Minor, dean of Stanford's medical school.

Even in the case of common conditions like polycystic ovary syndrome, it is better to consult a real doctor, as the disease can manifest differently in different people, affecting treatment choices.

Consider privacy when uploading health data

Many of the benefits offered by AI bots depend on users sharing personal medical information. It is important to remember that data handed over to the company developing the AI is not covered by the US federal privacy law that governs the handling of sensitive medical data.

The law, known as HIPAA, imposes fines and criminal penalties on doctors, hospitals, and other medical institutions for disclosing medical information. However, it does not apply to companies creating chatbots.

“When a person shares their medical record with a model, it is not the same as sharing it with a doctor,” explains Minor. “Consumers should understand that the privacy standards in these cases are completely different.”

OpenAI and Anthropic claim that users' health data is stored separately and protected by additional measures. The companies do not use medical information to train their models. Users must separately consent to the sharing of such data and can withdraw their consent at any time.

While interest in AI is high, independent research on such technologies is still in its early stages. Initial results show that programs like ChatGPT perform well on complex medical exams but often struggle in interactions with live people.

A recent study by the University of Oxford involving 1,300 participants showed that those who used AI chatbots to seek information about hypothetical diseases did not make more informed decisions than those who relied on traditional online searches or their own judgments.

When chatbots were presented with medical scenarios in written form, they correctly identified the underlying condition 95% of the time.

“That was not the problem,” comments lead author Adam Mahdi from the University of Oxford. “The issues arose during interactions with real people.”

Mahdi and his team identified several difficulties in communication. People often did not provide chatbots with enough information for accurate assessment of the problem, and the systems frequently gave a mix of correct and incorrect answers, making it difficult for users to distinguish between them.

The study conducted in 2024 did not cover the latest versions of chatbots, including ChatGPT Health.

An AI second opinion can be helpful

The ability of chatbots to ask clarifying questions and extract key information from users is an area where, according to Wachter, there is still much room for improvement.

“I believe they will become truly effective when their communication with patients is more ‘medical’ and the dialogue resembles a real conversation,” says Wachter.

In the meantime, one way to increase confidence in the information received is to consult multiple chatbots, just as patients sometimes seek a second opinion from another doctor.

“Sometimes I enter the same data into ChatGPT and Gemini,” shares Wachter, referring to Google’s AI tool. “And when their responses match, I feel more confident that it is the correct answer.”
