Trust is Key to Successful AI Implementation in Radiology

Training radiologists in AI is critical to the ethics of practice


The successful implementation of artificial intelligence (AI) in radiology practice comes down to trust and transparency, as well as keeping humans involved in the process.

“The relationship between patients and medical doctors is based on trust,” said Elmar Kotter, MD, senior consultant in the department of radiology at Freiburg University Medical Center in Germany. “The introduction of AI will increase the complexity of the medical system, particularly as both physicians and patients are using AI to gather information.”

So how do we establish trust in AI? Key components include formal accreditation of the AI system, radiologists learning and using the system, and patients trusting the physicians who use AI.

To use AI effectively, radiologists need to be properly trained. They will need to understand how AI works, how to integrate it into practice, how to evaluate its performance and how to recognize the ethical issues involved, said Dr. Kotter, who presented “Ethics of Practice” during RSNA 2020 as part of the session, “Ethics of AI in Radiology: Summary of the European and North American Multisociety Statement.”

Automation Bias Influences Trust

Another factor that influences trust in AI is automation bias.

“Automation bias is a tendency for humans to favor machine-generated decisions and ignore contrary data or conflicting human decisions,” Dr. Kotter said.

Radiology can draw on lessons from the aviation industry and from self-driving cars to prepare radiologists for the inevitability of AI-enhanced imaging tools. Automation bias can lead to a lack of monitoring and overreliance on machine-generated results. For example, studies in aviation, including analyses of incident reports recorded by the Aviation Safety Reporting System, found that pilots frequently failed to monitor important flight indicators or to disengage the autopilot in cases of malfunction.

A recent paper published in Radiology: Artificial Intelligence analyzed the Boeing 737 MAX disasters and identified five key lessons radiology can apply when building AI systems into radiology practice. Notably, AI creates new opportunities for failure, and those risks must be mitigated. Relatedly, the accuracy of inputs is as important as the accuracy of the AI algorithm itself. The authors also reinforce the importance of training radiologists on the system and its risks.

“We do not know today how the interaction between AI-based algorithms and radiologists will modify the decision-making of radiologists,” Dr. Kotter said. “There is a risk that resource-poor populations may be harmed to a greater extent by automation bias because there are fewer radiologists to veto the results.”

What is clear, according to Dr. Kotter, is that the increasing use of AI will result in systemic risks and increase the potential for error, ultimately raising the question of liability. Does responsibility for errors rest with those collecting the data, the company developing the AI tool, or the radiologist who signed the report?

We have seen this play out in the self-driving car industry. The Society of Automotive Engineers (SAE) defines six levels of driving automation, from 0 to 5, and where responsibility lies at each level. Dr. Kotter suggests radiology could develop similar definitions for the implementation of AI in radiology.

“Radiologists will need to acquire new skills to do their best for patients in the new AI ecosystem,” Dr. Kotter said. “We’ll remain ultimately responsible for patient care, which is based on trust.”

For More Information

Access the RSNA 2020 session, “Ethics of AI in Radiology: Summary of the European and North American Multisociety Statement,” at RSNA2020.RSNA.org.

Read previous RSNA News articles on AI: