
Doctor ChatGPT: the two sides of artificial intelligence in the consulting room


When a member of the public asked about the risk of dying after swallowing a toothpick, two answers came back. The first suggests that, between two and six hours after ingestion, it has probably already passed into the intestines, and that, besides, many people swallow toothpicks without coming to any harm; it warns, however, that if you have a “stomach ache” you should go to the emergency room. The second answer runs along the same lines, insisting that although it is normal to worry, serious damage after swallowing a toothpick is unlikely, since it is made of wood, which is not toxic or poisonous, and it is a small utensil; however, if you have “stomach pain, difficulty swallowing or vomiting,” you should see a doctor, it adds. “It is understandable that you may be feeling paranoid, but try not to worry too much,” it concludes.

The two answers say essentially the same thing, but they differ in form. One is more clinical and concise; the other, more sensitive and detailed. The first was written by a doctor in his own hand, the second by ChatGPT, the generative artificial intelligence (AI) tool that has revolutionized the planet in recent months. The study to which this experiment belongs, published in the journal JAMA Internal Medicine, set out to explore the role AI assistants could play in medicine by comparing the responses of real doctors and the chatbot to health questions posed by members of the public on an internet forum. The conclusion, after the responses were assessed by an external panel of health professionals who did not know who had answered what, was that ChatGPT’s explanations were more empathetic and of higher quality 79% of the time.

The explosion of new AI tools around the world has also opened a debate about their potential in healthcare. ChatGPT, for example, is looking for its place as a support for healthcare workers, helping with medical procedures or sparing them bureaucratic tasks, and at street level it is already being floated as a possible replacement for the inaccurate and often foolish Doctor Google. The experts consulted say it is a technology with great potential, but one still in its infancy: the regulatory framework has yet to be fleshed out to ensure its application in real medical practice, the ethical doubts it may raise need to be resolved and, above all, it has to be accepted that it is a fallible tool that can be wrong. Anything that comes out of this chatbot must always be reviewed by a health professional.


Paradoxically, the most empathetic voice in the JAMA Internal Medicine study is the machine, not the human, at least in written answers. Josep Munuera, head of the diagnostic imaging service at Barcelona’s Sant Pau Hospital and an expert in digital health technologies, warns that the concept of empathy is broader than what this study can capture. Written communication is not the same as face-to-face communication, nor are doubts raised on a social network the same as those raised in a consultation. “When we talk about empathy, we are talking about many things. Right now it is difficult to replace non-verbal language, which is very important when a doctor has to talk to a patient or their family,” he points out. He does, however, acknowledge the potential of these generative tools for translating medical jargon: “In written communication, medical jargon can be complex and we may have difficulty turning it into understandable language. These algorithms can probably find the equivalence between the technical term and another word adapted to the recipient.”

Joan Gibert, a bioinformatician and a leading figure in the development of AI models at Barcelona’s Hospital del Mar, adds another variable when weighing the machine’s empathy against the doctor’s. “The study mixes two concepts that feed the equation: ChatGPT itself, which can be useful in certain scenarios and is able to string words together in a way that makes us feel it is more empathetic, and physician burnout, the emotional exhaustion of caring for patients that leaves doctors with no room to be more empathetic,” he explains.

The danger of “hallucinations”

In any case, as with the famous Doctor Google, you always have to be careful with the answers ChatGPT gives, however sensitive or friendly they may seem. Experts stress that the chatbot is not a doctor and can fail. Unlike other algorithms, ChatGPT is generative: it creates information from the databases it was trained on, but it can make up some of the answers it gives. “You always have to keep in mind that it is not a stand-alone tool and cannot serve as a diagnostic instrument without supervision,” Gibert emphasizes.

These chats can suffer from so-called “hallucinations,” explains the Hospital del Mar bioinformatician: “Depending on the situation, it will tell you something that is not true. The chat puts words together in a coherent way and, because it holds a lot of information, it can be valuable. But it has to be checked because, if not, it can fuel false positives.” Munuera also stresses the importance of “knowing the database that trained the algorithm, because if the databases are insufficient, the answer will be insufficient too.”

“You have to understand that if you ask it to diagnose you, it may be making up a disease.”

Josep Munuera, Sant Pau Hospital in Barcelona

At street level, ChatGPT’s uses in healthcare are limited, since the information it provides can lead to errors. Jose Ibeas, a nephrologist at the Parc Taulí Hospital in Sabadell and secretary of the Big Data and Artificial Intelligence group of the Spanish Society of Nephrology, points out that it is “useful for the first layers of information, because it synthesizes information and helps, but when you go into a more specific area, in more complex pathologies, its usefulness is minimal or wrong.” Munuera agrees and stresses that “it is not an algorithm for helping to resolve doubts.” “You have to understand that when you ask it to give you a differential diagnosis, it may be making up a disease,” he warns. In the same way, the algorithm may answer a citizen’s doubts by concluding that it is nothing serious when in fact it is: an opportunity for care can be lost because the person settles for the chatbot’s answer and does not consult a real doctor.

Experts see more room for these applications as a support tool for health professionals, for example to answer patients’ questions in writing, always under the doctor’s supervision. The JAMA Internal Medicine study suggests this would help improve workflow and patient outcomes: “If more patients’ questions are answered quickly, with empathy and to a high standard, it might reduce unnecessary clinical visits, freeing up resources for those who need them. Moreover, messaging is a critical resource for fostering patient equity, since people with limited mobility or irregular work schedules are more likely to use messaging,” the authors note.

The scientific community is also exploring the use of these tools for other repetitive tasks, such as filling in forms and reports. “On the premise that everything, always, always, always, has to be checked by the doctor,” Gibert emphasizes, helping with bureaucratic tasks, repetitive but important, frees up doctors’ time to focus on other matters, such as the patients themselves. For example, an article published in The Lancet discusses the potential for streamlining discharge reports: automating this process could reduce the workload and even improve the quality of the reports, although, the authors note, they are aware of the difficulties the algorithms face in training on large volumes of data and, among other things, of the risk of a “depersonalization of care,” which could generate resistance to this technology.

Ibeas insists that this class of tools has to be “validated” for any medical use and that the division of responsibilities must be clearly defined: “The systems will never decide. The one who ends up signing is always the doctor,” he says.

Ethical issues

Gibert also points to some ethical considerations to bear in mind when bringing these tools into clinical practice: “You need this type of technology to be under a legal umbrella, you need solutions integrated within the hospital structure and you need to make sure that patient data is not used to retrain the model. And if someone wants to do the latter, they should do it within a project, with anonymized data and in compliance with all controls and regulations. Sensitive patient information cannot be shared carelessly.”

The bioinformatician also points out that AI solutions such as ChatGPT, or models that help with diagnosis, introduce “biases” into everyday medical practice; they may, for example, sway the doctor’s decision one way or another. “The fact that the professional has the result of an AI model in hand changes the evaluator himself: the relationship can be very good, but it can lead to problems, especially for professionals with less experience. That is why the process has to run in parallel: until the specialist has made the diagnosis, he cannot see what the AI says.”

In an article in JAMA Internal Medicine, a group of Stanford University researchers also reflected on how these tools can help further humanize healthcare: “The practice of medicine is much more than processing information and associating words with concepts; it is giving meaning to those concepts while connecting with patients as a trusted partner in building healthier lives. We can hope that emerging AI systems may help tame the laborious tasks that overwhelm modern medicine and allow clinicians to return their focus to treating human patients.”

Munuera is waiting to see how this fledgling technology spreads and what impact it has on the public: “You have to understand that [ChatGPT] is not a medical device and that there is no doctor who can certify that its answer is correct. You have to be careful and understand the limits.” Ibeas sums up: “The system is good, robust, positive and it is the future, but, like any tool, you have to know how to use it so that it does not become a weapon.”
