Artificial Intelligence Adds Spectrum of Value to Radiology

At RSNA 2018, leading experts discussed the spectrum of capabilities for AI that continues to permeate every area of health care


This is the first of a four-part series about AI in radiology.


Artificial Intelligence Will Enhance the Humanity of Health Care


Artificial intelligence (AI) has a far more important role to play in health care than replacing experts, according to Fei-Fei Li, PhD. AI will instead assist and enhance the work of clinicians.
During her RSNA 2018 lecture, Dr. Li, professor in the Stanford University Computer Science Department and co-director of the upcoming Stanford Human-Centered AI Institute, discussed her research into endowing health care spaces with ambient intelligence. By infusing AI into the physical space of a health care delivery system, she said, we can then find opportunities to improve.
“People are physical bodies being cared for in highly complex, physical spaces in a 24/7 environment,” Dr. Li said. “Using ambient intelligence can allow health care to become safer and offer higher quality care and delivery for the patient while optimizing workflow for the physician.”
Dr. Li’s team is working toward a truly ambient intelligence in health care environments by investigating a variety of ways in which AI can learn and ultimately enhance workflows. She described three key “ingredients” to their work: sensing, recognizing activities and integrating clinical data into the ecosystem.
She described the use of depth sensors, a technology that detects activity while preserving privacy. By placing sensors throughout a physical space, her team can collect data about activity over time and space. For example, the team has employed sensors to monitor hand washing, creating a remarkably accurate record of hygiene compliance in a hospital. In another project, the team is studying the variability of activities that take place in an ICU.
“I don’t have to convince you how important it is to improve care and control costs in the ICU,” she said.
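The hand-hygiene monitoring Dr. Li described ultimately comes down to aggregating sensor-detected events into compliance statistics. As a rough, purely illustrative sketch (the staff IDs and counts below are invented, and a real depth-sensor pipeline is far more involved than counting events):

```python
# Hypothetical sensor-derived counts, for illustration only:
# (staff_id, room_entries_detected, handwash_events_detected)
events = [
    ("nurse_01", 12, 11),
    ("nurse_02", 9, 6),
    ("doctor_03", 15, 13),
]

# Per-person hygiene compliance rate: washes divided by room entries
compliance = {sid: washed / entries for sid, entries, washed in events}

# Unit-wide compliance rate across all detected entries
overall = sum(w for _, _, w in events) / sum(e for _, e, _ in events)

print(compliance)
print(round(overall, 3))  # → 0.833
```

The interesting engineering, of course, is upstream: turning raw depth frames into reliable "entry" and "handwash" events in the first place.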
AI must somehow learn to recognize the dense, simultaneously occurring activities and behaviors in a health care space that humans recognize naturally, she explained. This is complex because computers can interpret the ways humans move through space very differently than people do.

Ambient Intelligence Could Be Infused Hospital-Wide

Eventually, Dr. Li believes, AI can reach a point where it detects and reasons about the types of actions taking place and predicts the activity that should come next. For example, in senior care, Dr. Li noted the importance of patient mobility and of understanding how a patient is moving or, in the case of a fall, not moving.

Dr. Li also discussed how to integrate all this data into the broader clinical ecosystem. “All this collected data needs to be integrated into an ecosystem that makes it available so that humans can understand the decision-making processes taking place in a health care setting to ultimately benefit the delivery of health care,” Dr. Li said.
In the future, she sees ambient intelligence being infused into whole hospitals where sensors are monitoring every activity and where AI will move beyond recognition to forecasting health care activities in order to improve efficiency and accuracy.
The future of AI is in collaboration with humans, and clinicians working together with AI agents will ultimately enhance the humanity of health care, she said.
“The heart of health care is humans caring for one another,” Dr. Li concluded. “We are working to enhance the interaction between patients and clinicians and let clinicians focus on providing care.”


Experts Share the Latest Findings on Informatics Research


Judging by the quality and quantity of papers published in major medical journals — and the overflow crowd attending an RSNA 2018 session — imaging informatics has advanced significantly in the last year, according to two leading experts. 
At the packed session, Charles E. Kahn Jr., MD, editor of the new RSNA online journal, Radiology: Artificial Intelligence, and deputy editor, William Hsu, PhD, shared some of the most significant studies on informatics published in scientific journals in the last year. 
One study highlighting the use of deep learning (DL) for MRI image reconstruction described how the method, automated transform by manifold approximation (AUTOMAP), could improve on the performance of existing acquisition methods.
“This is a unified framework for image reconstruction that exploits the network’s inherent ability to compensate for noise and other perturbations and it really goes beyond MRI reconstruction,” said Dr. Hsu, an associate professor of radiology at the University of California, Los Angeles (UCLA). “There’s a lot of interest in applying similar approaches to reconstruct CT images.”
Dr. Hsu also shared results from studies on machine learning (ML) models for the annotation of radiology reports and the use of algorithms to reduce errors due to reader variability that further underscore the great potential of artificial intelligence (AI).
“Can AI add value to radiology?” Dr. Hsu asked. “I think most of us would agree it can. We can enhance diagnostic accuracy, optimize worklists, perform initial analyses of cases in high-volume applications impacted by observer fatigue, extract information from images that are not apparent to the naked eye and improve the quality of reconstruction.”
Still, significant challenges remain, including a shortage of quality data, according to Dr. Kahn, professor and vice chair of radiology at the University of Pennsylvania’s Perelman School of Medicine in Philadelphia.
“Most people who have done work in this area have discovered that about 70 to 80 percent of the work that you do is not building the model or testing it,” he said. “It’s curating, cleaning and massaging the data to get it into shape.”
Recent studies have shown the potential for DL to address this dearth of quality data. A study shared by Dr. Kahn looked at the potential for institutions to distribute DL models rather than patient data, an approach that would lessen the need for the labor-intensive work of de-identifying images.
“That would solve for many of us the problems that we face in terms of building something like ImageNet with data from each of our institutions,” Dr. Kahn said.
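The general idea behind distributing models rather than patient data can be sketched with a toy example: each institution trains locally on data that never leaves the site, and only the learned parameters are shared and averaged. This is a minimal, invented illustration of the concept (a perceptron on synthetic one-dimensional data), not the method used in the study Dr. Kahn cited:

```python
import random

random.seed(1)

def make_site_data(n):
    # Synthetic stand-in for one institution's private labeled data
    out = []
    for _ in range(n):
        y = random.randint(0, 1)
        out.append((random.gauss(2.0 if y else -2.0, 1.0), y))
    return out

def local_train(data, epochs=50, lr=0.1):
    # Perceptron-style training; only these weights ever leave the site
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] + w[1] * x > 0 else 0
            err = y - pred
            w[0] += lr * err
            w[1] += lr * err * x
    return w

# Three institutions train independently on their own data
sites = [make_site_data(100) for _ in range(3)]
local_weights = [local_train(d) for d in sites]

# Aggregate by averaging the shared weights; no images are exchanged
global_w = [sum(w[i] for w in local_weights) / len(local_weights) for i in (0, 1)]
print(global_w)
```

The appeal is that the shared artifact (a small list of numbers) carries no patient images, sidestepping much of the de-identification burden the article describes.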

Radiology Study on Training Algorithms

New research reached eye-opening conclusions about the optimal number of images needed to train an algorithm. A November 2018 study in Radiology that looked at the automated classification of chest radiographs found that the DL model’s accuracy improved significantly when the number of images used to train the algorithm jumped from 2,000 to 20,000. However, accuracy improved only marginally when the number of training images increased from 20,000 to 200,000.

“That’s actually a useful thing, that maybe we don’t need to have millions of images in order to train the system,” Dr. Kahn said. “Maybe having a modest number would be a good start, along with other approaches that you could perhaps superimpose on top of that.”
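The diminishing returns Dr. Kahn described can be explored with a learning-curve experiment: train the same model on progressively larger subsets and track accuracy on a fixed test set. The stdlib-only toy below (a nearest-centroid classifier on synthetic one-dimensional data, not a chest-radiograph model) shows the mechanics of such an experiment; this easy toy plateaus almost immediately, but the same loop around a real model and dataset traces out the kind of curve the Radiology study reported:

```python
import random

random.seed(0)

def make_data(n):
    # Synthetic two-class data standing in for labeled images
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        data.append((random.gauss(-2.0 if label == 0 else 2.0, 1.0), label))
    return data

def train_centroids(train):
    # "Training" here is just computing the mean feature value per class
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / max(counts[y], 1) for y in (0, 1)}

def accuracy(centroids, test):
    # Classify each point by the nearest class centroid
    correct = sum(
        1 for x, y in test
        if min((0, 1), key=lambda c: abs(x - centroids[c])) == y
    )
    return correct / len(test)

test_set = make_data(2000)
for n in (20, 200, 2000, 20000):
    model = train_centroids(make_data(n))
    print(n, round(accuracy(model, test_set), 3))
```

Plotting accuracy against training-set size like this is a cheap way to estimate, for a given task, roughly where extra labeled data stops paying off.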

As for mining the data itself, Dr. Kahn pointed to natural language processing (NLP) as a promising avenue of research. NLP is the overarching term for the use of computer algorithms to identify key elements in everyday language and extract meaning from unstructured spoken or written input.

“NLP is using various systems to help mine data out of electronic health records,” he said. “Most of the information in electronic health records is text, and a lot of the resultant information is in the form of narrative text.”
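As a trivial, purely illustrative taste of what mining narrative text involves (the report fragment is invented, and real NLP systems use far more sophisticated models than regular expressions), the snippet below pulls measurements and explicitly negated findings out of free text:

```python
import re

# Invented report snippet, for illustration only
report = """FINDINGS: There is a 1.2 cm nodule in the right upper lobe.
No pleural effusion. No pneumothorax. Heart size is normal.
IMPRESSION: Pulmonary nodule; recommend follow-up CT in 6 months."""

# Extract size measurements such as "1.2 cm" from the narrative
measurements = re.findall(r"\d+(?:\.\d+)?\s?(?:cm|mm)", report)

# Flag findings explicitly negated with a leading "No ..."
negated = re.findall(r"No ([a-z ]+?)\.", report)

print(measurements)  # → ['1.2 cm']
print(negated)       # → ['pleural effusion', 'pneumothorax']
```

Even this crude pattern matching hints at why narrative text is both the richest and the messiest source of labels for training imaging algorithms.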