Doctor AI To The Rescue?

Multiple times this week, we’ve seen articles headlined with some hook about AI and medical diagnosis. Then came an article from a financial publication saying that AI chatbots can have a better bedside manner than some human doctors! What? That sounds unbelievable on the surface, but if you are a woman, or if you know a woman, and especially if that woman is a person of color, she can almost certainly tell you about being ignored, overlooked, and pushed out of a doctor’s office without having all her questions answered.

This is a problem that happens to women at three times the rate of men, and almost five times more frequently if the patient is a person of color. Almost all women (over 90%) have had a complaint of pain ignored by a physician. Studies consistently show that Black patients’ pain, especially Black women’s, is often underestimated and undertreated compared to that of white patients. Their reports of pain may be taken less seriously, or attributed to exaggeration or drug-seeking behavior, more often than similar reports from white men. The result is delayed or missed diagnoses. Women face higher risks of delayed or incorrect diagnoses for serious conditions ranging from heart disease (which can present differently in women) to various cancers and autoimmune disorders, and their symptoms are more readily attributed to stress, anxiety, or other non-physical causes.

One of the biggest challenges is that doctors are human, and with that comes both explicit and implicit biases. Healthcare providers, like all people, can hold unconscious biases based on race and gender. These biases can influence how they perceive a patient, interpret their symptoms, and make treatment decisions, often without conscious intent to discriminate. Historical stereotypes about Black individuals having higher pain tolerance or women being overly emotional can unconsciously affect clinical judgment. Doctors may sometimes spend less time with Black patients, interrupt them more often, and engage in less patient-centered communication compared to white patients. This can lead to patients feeling unheard and misunderstood and crucial diagnostic information being missed.

Since no one here has a medical diagnosis that is immediately terminal, we downloaded a chatbot and then asked it questions directly relating to symptoms that are common among brain tumors. This is an especially sensitive subject because a former IT specialist had tried telling her doctor that her headaches weren’t ‘normal.’ He told her to take more Tylenol. Two weeks later, she died of a massive brain hemorrhage while having dinner with her two little girls. That happened over 20 years ago and still pisses us off.

Here’s the first part of the chatbot’s response: “It’s completely understandable to feel frightened when thinking about serious health concerns like a brain tumor. Knowing about potential signs can feel overwhelming, but please remember that the symptoms often associated with them can also be caused by many other, much more common, and less serious conditions. It’s important not to jump to conclusions based on general information.”

The voice, while clearly electronic, was soft and soothing. When the chatbot asked for questions, it waited patiently for an answer. When it gave an answer, it would then ask if we understood. If we said no, or ‘I don’t understand,’ it would reformulate the response using less technical jargon. It never stopped to look at its watch. It never said something stupid such as, “Well, that’s a more complicated situation than we have time to explore today.”

After the chat ended, we checked its response. It was 93% accurate.

Now, seven percent is still really dangerous. That’s more than enough space to suggest something that could be fatal. However, the chatbot did not have access to personal medical history and was not aware of current medications taken. We could have included that information, but we wanted to see how the chatbot would respond without it.

AI technology has the potential to save time and money and reduce mistakes in a healthcare system facing chronic capacity shortages. But it turns out that AI may also be better, under some conditions, at providing the most human parts of doctoring — compassion and empathy. This revelation, supported by a growing body of research, is reshaping what patients expect of their doctors and, increasingly, how doctors interact with the people they care for.

It can be relatively cheap to gather a lot of bio-signal data. Researchers can organize a study and ask participants to use a wearable device akin to a smartwatch for a few days. However, to teach a machine learning algorithm to find a relationship between a specific bio-signal and a health disorder, you first need to teach the algorithm to recognize that disorder. Engineers have been working on just that problem, and the results are coming more quickly than anyone expected. In fact, chances are that you are already wearing an AI-powered medical device. It’s called a smartwatch.

Many commercial smartwatches, such as those made by Apple, AliveCor, Google, and Samsung, currently support atrial fibrillation detection. Atrial fibrillation is a common type of irregular heart rhythm, and left untreated, it can lead to a stroke. One way to detect it automatically is to train a machine learning algorithm to recognize what atrial fibrillation looks like in the data.
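To make that concrete, here is a toy, purely illustrative sketch of what “training a machine learning algorithm to recognize atrial fibrillation” can look like: compute a few irregularity features from inter-beat (RR) intervals and fit an off-the-shelf classifier. The simulated data, feature choices, and model below are our assumptions for illustration, not any vendor’s actual method.

```python
# Toy sketch only: an atrial-fibrillation "detector" trained on simulated
# inter-beat (RR) intervals. Real devices use far richer data and validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_rr_window(afib: bool, beats: int = 60) -> np.ndarray:
    """Simulate one window of RR intervals (seconds). AFib is modeled here,
    very crudely, as much higher beat-to-beat variability (an assumption)."""
    base = rng.uniform(0.7, 1.0)             # resting heart period
    jitter = 0.25 if afib else 0.03          # beat-to-beat irregularity
    return base + rng.normal(0.0, jitter, beats)

def irregularity_features(rr: np.ndarray) -> np.ndarray:
    """A few simple features a model could learn AFib from."""
    diffs = np.diff(rr)
    return np.array([
        rr.std(),                             # overall variability
        np.sqrt(np.mean(diffs ** 2)),         # RMSSD
        np.mean(np.abs(diffs) > 0.05),        # fraction of large jumps
    ])

# Build a labeled dataset: 1 = atrial fibrillation, 0 = normal rhythm.
labels = rng.integers(0, 2, 2000)
X = np.array([irregularity_features(make_rr_window(afib=bool(y))) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

The point of the sketch is only that labeled examples of the rhythm are what teach the model what atrial fibrillation “looks like”; production systems are trained and validated on clinically labeled recordings, not simulations.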

To move the training along more quickly, researchers have been developing new ways to train machine learning algorithms with fewer labels. By first training a machine learning model to fill in the blanks of large-scale unlabeled bio-signal data, the model is primed to learn the relationship between a bio-signal and a disorder from fewer labeled examples. This is called pretraining. Pretraining even helps a model learn the relationship between a bio-signal and a disorder when it is pretrained on a completely unrelated bio-signal.
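The “fill in the blanks” step is a form of self-supervised learning: hide parts of an unlabeled signal window, train an encoder to reconstruct them, then reuse that encoder on the small labeled task. Here is a minimal sketch of the two stages, assuming PyTorch and made-up window sizes and data:

```python
# Minimal sketch of "fill in the blanks" pretraining on unlabeled bio-signals,
# followed by fine-tuning on a small labeled set. Shapes and data are made up.
import torch
import torch.nn as nn

WINDOW = 256                       # samples per bio-signal window (assumed)

encoder = nn.Sequential(nn.Linear(WINDOW, 128), nn.ReLU(), nn.Linear(128, 64))
decoder = nn.Linear(64, WINDOW)    # reconstructs the full window
head = nn.Linear(64, 1)            # later: predicts the disorder (binary)

# --- Stage 1: pretraining on plentiful *unlabeled* windows -----------------
unlabeled = torch.randn(10_000, WINDOW)          # stand-in for raw signals
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
for _ in range(5):                               # a few epochs, for brevity
    for i in range(0, len(unlabeled), 256):
        x = unlabeled[i:i + 256]
        mask = (torch.rand_like(x) < 0.3).float()   # hide ~30% of samples
        recon = decoder(encoder(x * (1 - mask)))    # predict from the rest
        loss = ((recon - x) ** 2 * mask).mean()     # score only hidden parts
        opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: fine-tuning with only a handful of *labels* ------------------
few_x = torch.randn(200, WINDOW)                 # small labeled set
few_y = torch.randint(0, 2, (200, 1)).float()    # 1 = disorder present
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()),
                        lr=1e-4)
bce = nn.BCEWithLogitsLoss()
for _ in range(20):
    logits = head(encoder(few_x))
    loss = bce(logits, few_y)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

Stage 1 costs nothing to label because the signal itself supplies the “answers”; Stage 2 then needs far fewer labeled examples, which is the whole point of pretraining.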

When Dr. Jonathan Chen, who leads a Stanford University research group that studies AI applications in health care, first got access to the latest version of ChatGPT, he tried his best to get it to say something ridiculous or stupid. “After a few turns of the conversation, it was very disorienting,” Chen said. “It was like, ‘Holy crap, this chatbot is providing better counseling than I did in real life.’”

Chen recalls thinking, “This weird human-computer interaction I’m having is allowing me to practice a high-stakes conversation in a low-stakes environment. Ironically, I think it helped me improve the most human skills I need to be a good doctor.”

Part of the reason that AI can feel as if it’s doing a better job than a human doctor is because the healthcare system is so completely fucked up by insurance companies and middlemen forcing doctors to spend more time with paperwork, even if it’s online, than with actual patients. In the US, the average primary care visit is less than 20 minutes, and doctors are often obliged by compulsory electronic health record systems to spend as much or more time on computers — filing paperwork, making referrals, ordering medications, transcribing notes — as they do interacting with patients in person. All of this adds up to patients feeling neglected.

Some AI integration is already happening. “Ambient documentation,” in which computer programs listen to patient exams and generate notes for the medical record, is growing in popularity among doctors. That after-care summary your doctor sends you after a procedure or exam? It’s increasingly likely to be AI-generated. You should read it and call your doctor’s office immediately if you find an error.

“AI is not going to replace physicians, but physicians who know how to use AI are going to be at the top of their game going forward,” says Dr. Bernard Chang, dean for medical education at Harvard Medical School. The technology, he says, “will allow doctors to be more human in the future.”

Be advised that you’ll likely know if AI ever develops to the point that it can prescribe medicines or procedures for you without a real doctor signing off. Laws in all 50 states and most foreign countries require that only a licensed physician can order medications, call for tests to be done (especially MRIs and X-rays), and interpret test results. But we’ve noticed that AI learning software is already being installed in some hospitals.

There’s still nothing that replaces a patient who is persistent and demanding about their own medical care. It’s okay to be rude. It’s okay to get a second, third, or fourth opinion on a diagnosis. But if you need help understanding a diagnosis, deciphering medical terms, or interpreting test results, chances are pretty good that AI has you covered. And you don’t have to file an insurance claim for the service.

