
Who will take responsibility if medical artificial intelligence makes a mistake?


If medical artificial intelligence makes a mistake, who is to blame? The WHO has published guidance.

 


On June 28, the World Health Organization (WHO) issued a guidance document, “Ethics and Governance of Artificial Intelligence for Health.” Running to more than 160 pages across nine chapters, it covers the application of AI in health care, the applicable laws and policies, key ethical principles and the challenges they raise, accountability mechanisms, and a governance framework.

 

According to the WHO, this is the first comprehensive international guidance on AI in health grounded in ethics and human rights standards.

 


 

The guidance took two years to complete and was prepared by the WHO’s digital health and innovation and research teams. The WHO also convened a 20-member expert leadership group to identify six core principles for raising the ethical standard of AI in health, the first consensus principles in this field.

 

The six core principles are:

1. Protect human autonomy

2. Promote human well-being, human safety, and the public interest

3. Ensure transparency, explainability, and intelligibility

4. Foster responsibility and accountability

5. Ensure inclusiveness and equity

6. Promote AI that is responsive and sustainable

 

The WHO notes that AI holds great promise for health care and will be applied in four areas: medical care, health research and drug development, health system management and planning, and public health and surveillance. Drawing on the guidance, this article summarizes some of the ethical issues that may arise when AI is used in medicine, along with the recommended responses.

 

 

 

Will AI cause medical staff to lose their jobs?


“If we do things right, thirty years from now doctors should struggle to find jobs, hospitals should be fewer, and pharmaceutical factories far fewer.” At the World Internet Conference in November 2014, Jack Ma, then chairman of Alibaba’s board of directors, announced that health would become one of Alibaba’s two major industries in the future.

 

In the newly released guidance, the WHO repeatedly mentions US companies such as Google, Facebook, and Amazon, as well as Chinese internet technology companies such as Tencent, Alibaba, and Baidu. The Chinese platforms provide users with online medical information and related services, benefiting millions of people in China.

 

When describing the trend toward AI in clinical care, the WHO cited China as an example: “In China, the number of telemedicine providers nearly quadrupled during the COVID-19 pandemic.” Telemedicine can ease shortages of medical resources and staff. At the same time, patients with chronic and other conditions can use AI to better manage their own health, reducing the demand for medical human resources.

 

So will doctors one day be unable to find jobs, as Jack Ma predicted? The optimistic view in the guidance is that AI will reduce clinicians’ daily workload, allowing them to spend more time with patients and focus on more challenging tasks, and that displaced workers will move into other roles, such as labelling data or designing and evaluating AI systems, rather than losing their jobs.

 

The pessimistic view is that AI will automate many of the tasks now performed by medical staff, and its spread through a large number of jobs will cause short-term instability. Even if new jobs are created and overall employment rises, many positions in certain fields will disappear, and workers who are not qualified for the new roles will be left unemployed.

 

Responses:

 

Almost all medical and health jobs will require at least a minimum of digital and technical proficiency; one study estimates that within 20 years, 90% of jobs in the UK’s National Health Service (NHS) will require digital skills. Doctors must improve their abilities in this area, and at the same time communicate more with patients about the risks of using AI, the predictions it makes, and the trade-offs involved, including the moral and legal risks of using AI technology.

 

It should also be noted that reliance on AI systems may weaken humans’ capacity for independent judgment. In the worst case, if an AI system fails or is compromised, medical staff and patients may be unable to act. Robust contingency plans should therefore be developed to provide support when technical systems fail or are disabled.
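
What such a contingency plan can look like in software, as a minimal sketch: the calling code treats the AI service as optional and falls back to the manual clinical workflow when it fails. All names, functions, and exceptions below are hypothetical, invented for illustration; the WHO guidance prescribes no particular implementation.

```python
# Hypothetical fallback pattern: if the AI triage service is unavailable,
# the system degrades gracefully to the manual workflow instead of
# blocking care. Every name below is invented for this sketch.

class AiServiceUnavailable(Exception):
    """Raised when the model endpoint cannot be reached."""

def ai_triage(symptoms: list[str]) -> str:
    # Stand-in for a real model or network call that can fail.
    raise AiServiceUnavailable("model endpoint not reachable")

def manual_triage(symptoms: list[str]) -> str:
    # The human workflow that must always remain available.
    return "route to duty clinician for assessment"

def triage(symptoms: list[str]) -> str:
    try:
        return ai_triage(symptoms)
    except AiServiceUnavailable:
        # Contingency path: the failure is contained and care continues.
        return manual_triage(symptoms)

print(triage(["fever", "cough"]))  # -> route to duty clinician for assessment
```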

 

In fact, many high-, middle-, and low-income countries are already facing shortages of health workers; the WHO estimates a shortfall of 18 million health workers by 2030, mainly in low- and middle-income countries. Moreover, clinicians’ experience of and knowledge about their patients remain of the utmost importance. AI is therefore unlikely to replace the duties of clinicians in the foreseeable future.

 

 

 

What should be done when AI and a doctor have a “peer disagreement”?
How can doctors’ autonomy be ensured?


In medical care and diagnosis, AI is most widely used in radiology and medical imaging, though even there it is still relatively new and not yet routine in clinical decision-making. AI is currently being evaluated for diagnosing tumors and ophthalmic and lung diseases; it may enable timely detection of stroke, pneumonia, cervical cancer, and other conditions through imaging and echocardiography, and it may help predict the onset of cardiovascular disease, diabetes, other illnesses, or major health events. In clinical care, AI can also predict disease progression and drug resistance.

 

In some aspects of clinical care, letting AI replace human judgment can be a good thing: compared with machines, humans may make decisions that are less fair, more biased, and simply worse. Where a machine can decide more quickly, accurately, and sensitively, leaving the decision to humans may mean that some patients suffer avoidable illness and death.

 

For now, decision-making power in health care has not been fully transferred from people to machines. But a paradox arises when AI and a doctor have a “peer disagreement”: if the doctor simply ignores the machine, the AI has no value; if the doctor fully defers to the AI’s decision, the doctor’s own authority and responsibility may be weakened.

 

A situation even worse than “peer disagreement” may be emerging: AI systems could replace humans as cognitive authorities, with routine medical functions handed over entirely to AI. If appropriate measures are not taken, this will undermine human autonomy: humans may neither understand how the AI reaches its decisions nor be able to negotiate with it toward an agreed decision.

 

Responses:

 

In the context of medical AI, autonomy means that humans should remain in full control of health systems and medical decision-making. In practice, it should be possible to decide whether to use an AI system for a particular medical decision, and to defer to human judgment when appropriate. This ensures that clinicians can override decisions made by AI systems, making those decisions “essentially reversible.”
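
One way to picture “essentially reversible” decisions is a workflow in which the AI only proposes and a clinician must confirm or override, with every override recorded. The sketch below is a minimal, hypothetical illustration; all class and field names are invented and nothing here comes from the WHO guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AiSuggestion:
    patient_id: str
    diagnosis: str
    confidence: float  # the model's own confidence estimate, 0.0 to 1.0

@dataclass
class FinalDecision:
    patient_id: str
    diagnosis: str
    decided_by: str    # the clinician remains the decision maker of record
    overrode_ai: bool
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def finalize(suggestion: AiSuggestion, clinician: str, accepted: bool,
             alternative: str = "", rationale: str = "") -> FinalDecision:
    """The clinician's judgment always takes precedence over the AI's."""
    if accepted:
        return FinalDecision(suggestion.patient_id, suggestion.diagnosis,
                             clinician, overrode_ai=False,
                             rationale=rationale or "concurred with AI")
    # Override path: the AI's output is reversible by design, and the
    # reason for the override is kept for accountability.
    return FinalDecision(suggestion.patient_id, alternative,
                         clinician, overrode_ai=True, rationale=rationale)

# Example: the clinician rejects the AI's suggestion and records why.
s = AiSuggestion("patient-001", "pneumonia", confidence=0.71)
d = finalize(s, clinician="Dr. Chen", accepted=False,
             alternative="bronchitis",
             rationale="auscultation and history inconsistent with pneumonia")
print(d.overrode_ai, d.diagnosis)  # True bronchitis
```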

 

In addition, AI should be transparent and interpretable. Medical institutions, health systems, and public health agencies should regularly publish their reasons for using particular AI technologies and regularly evaluate the AI in use, to prevent the emergence of “algorithmic black boxes” that even their developers cannot understand.
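
To make “interpretable” concrete: one common approach is a model that reports, for each prediction, which inputs pushed the result up or down. The following sketch is a hypothetical linear risk score with invented weights and feature names, not any system described in the guidance.

```python
import math

# Hypothetical interpretable risk model: a linear score whose per-feature
# contributions can be shown to the clinician, instead of a black box.
WEIGHTS = {"age_over_65": 1.2, "smoker": 0.9, "elevated_crp": 1.5}
BIAS = -2.0

def predict_with_explanation(features: dict[str, float]):
    """Return a risk probability plus each feature's signed contribution."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS
    }
    logit = BIAS + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))  # logistic link
    return risk, contributions

risk, why = predict_with_explanation(
    {"age_over_65": 1.0, "smoker": 0.0, "elevated_crp": 1.0}
)
print(f"risk = {risk:.2f}")  # risk = 0.67
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")  # contribution to the log-odds
```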

 

 

 

Who is responsible when mistakes are made using AI?


According to a July 2018 report in the Nihon Keizai Shimbun, the Japanese government planned a series of rules for AI medical devices stipulating that ultimate responsibility for diagnosis rests with the doctor: because AI can misdiagnose, AI medical devices are positioned as assistive equipment, and under the “Physician Law” “the responsibility for making the final diagnosis and determining the treatment policy shall be borne by the doctor.” By clarifying the scope of responsibility in this way, manufacturers are encouraged to develop AI medical devices.

 

The news provoked considerable controversy at the time. The WHO guidance notes that, on the whole, AI may reduce misdiagnosis rates if the underlying data are accurate and representative. But when a misdiagnosis does occur, is it reasonable for the doctor to bear the responsibility?

 

The WHO’s answer is no. First, clinicians do not control the AI technology. Second, because AI is often opaque, doctors may not understand how the system transforms data into decisions. Third, the use of AI may reflect the preference of the hospital system or other external decision makers rather than the clinician’s own choice.

 

The guidance points out that certain characteristics of AI complicate the concepts of responsibility and accountability and may create a “responsibility gap”: because AI systems evolve on their own, with not every step set by a human, developers and designers may claim they are not responsible, leaving the entire risk of patient harm on the medical workers closest to the AI, which is unreasonable.

 

A second challenge is the “traceability” of harms, which has long plagued medical decision-making. Because AI development involves contributions from many parties, it is difficult to assign responsibility legally and morally. Moreover, ethics guidelines are often issued by technology companies themselves, in the absence of authoritative or legally binding international standards; monitoring of a company’s compliance with its own guidelines is usually internal, with little transparency, no third-party enforcement, and no legal force.

 

Responses:

 

If clinicians make mistakes when using AI, one should ask whether the fault lies in how they were trained to use it. If a flawed algorithm or flawed training data is at fault, responsibility may fall on those who developed or tested the AI. Clinicians, however, should not be completely exempt: they cannot simply rubber-stamp the machine’s recommendations while ignoring their own professional knowledge and judgment.

 

When AI-based medical decisions harm individuals, accountability procedures should clarify the relative roles of manufacturers and clinicians. Assigning responsibility to developers encourages them to minimize harm to patients. Other manufacturers, including makers of drugs, vaccines, and medical devices, also need their responsibilities clarified.

 

When AI is adopted across an entire medical system, developers, institutions, and doctors may each play a role in a medical injury, yet none bears “full responsibility.” In such cases, responsibility may rest not with the provider or developer of the technology but with the government agency that selected, validated, and deployed it.

 

In the guidance, the WHO also addresses ethical issues such as patient privacy, the responsibilities of commercial technology companies, bias and error in algorithmic resource allocation, and the climate impact of the carbon dioxide emitted by computing. It calls for a governance framework for medical AI and makes recommendations on data governance, governance of the private and public sectors, policy observatories and model legislation, and global governance.

 

“Our future is a race between the growing power of technology and the wisdom with which we use it.” In the guidance, WHO Chief Scientist Dr. Soumya Swaminathan quoted these words of Stephen Hawking.

 

 

 


(source: internet, for reference only)


Disclaimer of medicaltrend.org


Important Note: The information provided is for informational purposes only and should not be considered as medical advice.