OpenAI’s Model Matches Doctors in Assessing Eye Conditions
According to new research, OpenAI’s latest artificial intelligence model is nearly on par with expert doctors in analyzing eye conditions, highlighting the technology’s potential in medicine. A paper published this Wednesday showed that GPT-4, developed by the Microsoft-backed startup, performed as well as or better than all but the top-ranked specialist in evaluating eye problems and suggesting treatment recommendations.
Ophthalmology has been a focal point for applying artificial intelligence in clinical settings, and for addressing barriers to its use, such as the tendency of models to “hallucinate” fictitious data. “This work demonstrates that these large language models now have knowledge and reasoning capabilities in eye health that are nearly indistinguishable from experts,” said Arun Thirunavukarasu, the lead author of the paper, published in the journal PLOS Digital Health.
“We’ve seen the ability to answer fairly complex questions.” The study used 87 different patient scenarios to test GPT-4’s performance against non-specialist junior doctors, trainee ophthalmologists, and expert ophthalmologists. The paper states that the model outperformed the junior doctors and achieved results similar to those of many of the experts.
Researchers say the study is notable for comparing an artificial intelligence model against practicing physicians rather than against examination results. It also drew on the broad capabilities of generative artificial intelligence, rather than the narrow abilities tested in some previous AI medical studies, such as assessing cancer risk from patient scans. The model performed equally well on questions requiring first-order recall and on those requiring higher-level reasoning, such as interpolation, explanation, and the ability to manipulate information.
Thirunavukarasu conducted this research during his time at the University of Cambridge School of Clinical Medicine and is currently working at the University of Oxford. He believes the model could be further improved by training it on an expanded dataset, including management algorithms, de-identified patient notes, and textbooks. Doing so requires a “tricky balance” between the quantity and nature of the additional information sources and the quality of that information.
Potential clinical applications could include triaging patients or supporting care in settings where specialist medical personnel are scarce. Evidence that AI can aid diagnosis, such as detecting early-stage breast cancers that doctors might overlook, has increased interest in deploying AI in clinical settings. At the same time, researchers are working to control serious risks, given the potential harm of misdiagnoses to patients.
Pearse Keane, Professor of Ophthalmology at University College London and an ophthalmologist at Moorfields Eye Hospital in London, said that the latest study is “exciting” and that the idea of benchmarking AI against expert performance is “super interesting.” However, Keane believes more work is needed before these technologies enter clinical practice.
He cited an example from his own research last year: when he asked a large language model questions about macular degeneration, it provided answers containing references that were essentially “fabricated.” “We must strike a balance between excitement for this technology and its potentially enormous benefits… there should be at least caution and skepticism,” he said.