Application of computers in medical diagnosis
What software might a doctor use to diagnose a condition? Several attempts have been made.
Online tools are one potential aid for doctors. Although we have heard anecdotes about online searches helping to crack difficult diagnoses, simple symptom lookup has not been shown to yield accurate diagnoses.
One of the earliest symptom checkers, used by doctors and now by patients, is the Isabel Symptom Checker, which covers more than 6,000 diseases. For a North American male patient in his 50s or 60s, entering “cough” and “fever” prompts the program to list “possible” diagnoses of influenza, lung cancer, acute appendicitis, pulmonary edema, relapsing fever, atypical pneumonia, and pulmonary embolism. With the exception of influenza and atypical pneumonia, almost all of these diagnoses can easily be ruled out, because the patient’s symptoms do not fit them.
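The kind of symptom-to-differential lookup described above can be pictured as a simple overlap ranking. The sketch below is only an illustration of that idea: the disease–symptom table is invented, and this is not Isabel's actual knowledge base or algorithm.

```python
# Illustrative sketch of a naive symptom checker: rank candidate diagnoses
# by how many of the entered symptoms each one is associated with.
# The knowledge base here is invented for illustration only.

KNOWLEDGE_BASE = {
    "influenza":          {"cough", "fever", "myalgia", "fatigue"},
    "atypical pneumonia": {"cough", "fever", "dyspnea"},
    "pulmonary embolism": {"dyspnea", "chest pain", "tachycardia"},
    "acute appendicitis": {"abdominal pain", "fever", "nausea"},
}

def rank_diagnoses(symptoms):
    """Rank diseases by the number of entered symptoms they explain."""
    scores = {
        disease: len(symptoms & associated)
        for disease, associated in KNOWLEDGE_BASE.items()
    }
    # Highest overlap first; drop diseases that match no symptom at all.
    return [d for d, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

print(rank_diagnoses({"cough", "fever"}))
# → ['influenza', 'atypical pneumonia', 'acute appendicitis']
```

Note how even a single shared symptom ("fever") keeps acute appendicitis in the list, which illustrates why checkers built this way produce implausible differentials.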
In a 2015 study published in the British Medical Journal, 23 symptom checkers were evaluated; after case information was entered, the correct diagnosis was listed first only 34% of the time. Despite these poor results, the number of mobile symptom-checking applications has surged in recent years. Although they all incorporate artificial intelligence methods, none has been shown to match the accuracy of a doctor’s diagnosis, so they should not be regarded as a gold standard. Start-up companies that design such applications have also begun to collect information beyond the symptom list, asking patients a series of questions about, for example, their health history; iterative questioning can reduce errors and improve accuracy. One such application, Buoy Health, draws on more than 18,000 clinical publications, 1,700 medical symptom descriptions, and data from more than 5 million patients.
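The idea of iterative questioning can be pictured as successively filtering a candidate list: each answer removes diagnoses inconsistent with it. The sketch below uses invented questions and disease profiles; real products such as Buoy Health are far more sophisticated than this.

```python
# Illustrative sketch: iterative questioning as successive filtering of a
# differential. Disease profiles and questions are invented for illustration.

CANDIDATES = {
    "influenza":          {"fever": True,  "smoker": False, "sudden_onset": True},
    "lung cancer":        {"fever": False, "smoker": True,  "sudden_onset": False},
    "atypical pneumonia": {"fever": True,  "smoker": False, "sudden_onset": False},
}

def refine(candidates, attribute, answer):
    """Keep only the candidates consistent with the patient's answer."""
    return {d: profile for d, profile in candidates.items()
            if profile[attribute] == answer}

remaining = dict(CANDIDATES)
for attribute, answer in [("fever", True), ("sudden_onset", True)]:
    remaining = refine(remaining, attribute, answer)

print(sorted(remaining))  # → ['influenza']
```

Each additional answer shrinks the differential, which is why asking follow-up questions reduces error compared with matching a flat symptom list.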
However, the notion that a correct diagnosis can be reached from a list of symptoms seems too simple. When we listen to a patient’s chief complaint, symptoms are clearly not a binary matter of present or absent; on the contrary, they are subtle and subjective. A patient with an aortic dissection, for example, may not describe the sensation as “chest pain”; during a heart attack, a patient may hold a clenched fist to the chest (Levine’s sign) to indicate a feeling of pressure rather than pain, or may feel burning without any pressure or pain at all. More complicated still, for these diagnostic applications, how a patient communicates through word choice, facial expression, and body language matters a great deal, and these cues are hard to capture in a few words.
Computers can also help obtain a second opinion, which increases the probability of a correct diagnosis. In a Mayo Clinic study of nearly 300 consecutive referral patients, researchers found that in only 12% of cases did the final diagnosis agree with the referring doctor’s. Worse, a second opinion is often out of reach, partly because of the additional cost, the difficulty of getting an appointment, and sometimes the lack of relevant specialists. Although we are still weighing the trade-offs between face-to-face consultation and having more doctors participate remotely, telemedicine does make it easier for additional doctors to join the diagnostic process.
In the late 1990s and early 2000s, when I was working at the Cleveland Clinic, we launched an online service called “MyConsult”. The service has since provided tens of thousands of second opinions, many of which were at odds with the initial diagnosis.
Doctors also hope to crowdsource diagnostic help from colleagues to improve accuracy. Although not exactly “System 2 thinking,” this approach draws on the reflective input and experience of multiple experts. In recent years several smartphone apps for doctors have come to market, including Figure 1, HealthTap, and DocCHIRP. Figure 1 is especially popular: doctors share medical images so that colleagues can assist with rapid diagnosis. My Scripps team recently published data on Medscape Consult, currently the most widely used crowdsourcing app for doctors in the United States. Within two years of launch it had grown steadily to 37,000 physician users across more than 200 countries and many specialties, and requests for help were answered quickly. Interestingly, the average user age is over 60.
HumanDx (the Human Diagnosis Project) is a web- and mobile-based platform that has been used by more than 6,000 doctors and trainees from 40 countries. In a study comparing more than 200 doctors with computer algorithms, the doctors’ diagnostic accuracy was 84%, while the algorithms’ was only 51%. For both doctors and artificial intelligence this result is somewhat sobering, but with the support of organizations such as the American Medical Association and the American Board of Medical Specialties, its leaders hope to integrate physicians’ judgment with machine learning to improve diagnostic accuracy.
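One simple way to picture integrating physicians with machine learning is a weighted blend of crowdsourced physician votes and an algorithm's confidence scores. The weighting, data, and function below are all invented for illustration; this is not HumanDx's actual method.

```python
# Illustrative sketch: blend crowdsourced physician votes with an
# algorithm's confidence scores via a weighted average.
# The 0.7/0.3 weighting and all data are invented for illustration.

def blend(doctor_votes, algo_scores, doctor_weight=0.7):
    """Return the diagnosis with the highest combined score."""
    total_votes = sum(doctor_votes.values())
    combined = {}
    for diagnosis in set(doctor_votes) | set(algo_scores):
        vote_share = doctor_votes.get(diagnosis, 0) / total_votes
        combined[diagnosis] = (doctor_weight * vote_share
                               + (1 - doctor_weight) * algo_scores.get(diagnosis, 0.0))
    return max(combined, key=combined.get)

votes = {"rheumatoid arthritis": 4, "osteoarthritis": 1}
scores = {"rheumatoid arthritis": 0.6, "lupus": 0.3, "osteoarthritis": 0.1}
print(blend(votes, scores))  # → rheumatoid arthritis
```

In this toy setup the physicians' majority dominates, but a diagnosis flagged only by the algorithm (here, lupus) still enters the combined ranking, which is the point of integrating the two.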
An anecdote shared by the physician Shantanu Nundy gives us hope. Nundy once saw a female patient in her 30s with joint pain and stiffness. Unsure whether she had rheumatoid arthritis, he posted the following on HumanDx:
“Female, 35 years old, with hand pain and joint stiffness for 6 months; suspect rheumatoid arthritis.” He also uploaded photos of the patient’s inflamed hands. Within a few hours, several rheumatologists had confirmed the suspected diagnosis. By 2022, HumanDx plans to recruit at least 100,000 doctors and to combine artificial intelligence tools with physician crowdsourcing, adding natural language processing algorithms to route key data to the appropriate experts.
Another way to improve diagnosis through crowdsourcing is to incorporate citizen science. CrowdMed has built a platform that uses financial incentives to put doctors and laypeople in competition to solve difficult cases.
Admitting non-clinicians to the diagnostic process is novel and has produced unexpected results: the company’s founder and CEO, Jared Heyman, told me that laypeople’s diagnoses are sometimes even more accurate than those of the participating doctors. Our team at Scripps has not had the opportunity to examine their data and confirm the accuracy of the final diagnoses. If confirmed, however, one explanation may be that laypeople usually have more time to research a case in depth and so find the correct answer in complex cases, reflecting the value of slow, careful work and thorough due diligence.
This article is excerpted from “Deep Medicine” by Eric Topol.
(Source: chinanet; for reference only)