AI Uses Body CT to Classify Multiple Diseases in Different Organ Systems
Algorithm could relieve radiologists of searching for disease in areas of low suspicion
An automated system that uses AI to classify multiple diseases in different organ systems on body CT could improve radiologist workflow and performance, according to new research. Advances in CT technology have made the information contained in a scan increasingly complex, and new technologies are poised to add even more to radiology workloads at a time when the profession is struggling with staff shortages and burnout.
“Manufacturers are coming out with very innovative advances that allow for the scanners to generate deeper, larger and higher-resolution datasets in shorter periods of time,” said Geoffrey D. Rubin, MD, chair and professor of medical imaging at the University of Arizona College of Medicine in Tucson. “This has challenged radiologists.”
These trends have raised the need for automated systems that can quickly assess CT scans for signs of disease.
Creating an Algorithm Using Natural Language Processing in Radiology Reports
Dr. Rubin began working on such a system during his tenure at Duke University in Durham, NC, where he was professor and chair of radiology. There, he and medical physicist Joseph Y. Lo, PhD, developed a computer-aided triage tool that searches body CT images and identifies organ systems with a high suspicion of actionable disease. By flagging abnormalities anywhere in the scan, the tool is intended to focus a reader’s attention on abnormal regions and away from areas the model is confident are normal.
“It’s a lot for radiologists to comb through millions of voxels within these datasets,” Dr. Rubin said. “We seek to help direct their search and give radiologists confidence that they needn’t search in areas where the model says they don’t need to search and focus their attention on targeted areas where they do need to search.”
Rather than rely on strongly supervised AI, a time-consuming approach that requires manual annotation of images by a radiologist, Drs. Rubin and Lo used weak supervision with natural language processing to take advantage of the rich sources of information in radiology text reports.
“Through the magic of deep learning using convolutional neural networks, image features are discovered and extracted by the model to differentiate what is in a dataset that makes it have a label of, say, lung cancer,” Dr. Rubin said. “It’s much easier to bulk label hundreds of thousands of CT scans in that weak manner.”
“We’re giving the algorithm many, many more data points, so that even if each case has weak information, the AI can learn from that and come up with the true pattern,” Dr. Lo added.
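The report-mining step can be sketched as keyword rules with a crude negation check. This is a minimal illustration of the weak-labeling idea; the rule patterns, disease names, and negation handling below are assumptions for illustration, not the study's actual extraction logic.

```python
import re

# Hypothetical keyword rules -- the study's rule-based extractor is more
# sophisticated; these patterns only illustrate the weak-labeling idea.
RULES = {
    "lung_mass": re.compile(r"\b(lung|pulmonary)\b.*\b(mass|nodule)\b", re.I),
    "emphysema": re.compile(r"\bemphysema\b", re.I),
    "liver_lesion": re.compile(r"\b(liver|hepatic)\b.*\b(lesion|mass)\b", re.I),
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.I)

def weak_labels(report: str) -> dict:
    """Assign noisy disease labels to a CT scan from its free-text report."""
    labels = {}
    for sentence in report.split("."):
        for name, pattern in RULES.items():
            if pattern.search(sentence):
                # A crude negation check keeps "no pulmonary nodule" negative.
                labels[name] = not NEGATION.search(sentence)
    return labels
```

Applied to a report such as `"No pulmonary nodule. Hypodense hepatic lesion noted."`, this sketch would mark `lung_mass` negative and `liver_lesion` positive, producing the kind of noisy-but-plentiful labels weak supervision trains on.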
Study Looks for Disease in Organ Systems
In a study published online in Radiology: Artificial Intelligence, Drs. Rubin and Lo used rule-based algorithms to automatically extract almost 20,000 disease labels from the radiology reports of 13,667 body CT scans, covering more than 12,000 patients. Those weak labels were then used to train deep learning models to classify multiple diseases in three organ system groups on body CT: the lungs and pleura, the liver and gallbladder, and the kidneys and ureters.
“A study of this size would have been impossible to do the old-fashioned way with manual annotation,” Dr. Lo said.
For each of the three organ systems, a 3D convolutional neural network classified each case as showing no apparent disease or as positive for any of four common diseases, for a total of 15 labels.
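The 15-label scheme follows directly from that structure: each of the three organ systems gets a "no apparent disease" label plus four disease labels, giving 3 × 5 = 15 binary outputs for a multi-label classifier. A minimal sketch of the label space, with hypothetical disease names standing in for the study's actual label set:

```python
# Hypothetical disease names per organ system -- the study defines
# "no apparent disease" plus four common diseases for each of three
# systems, giving 3 * 5 = 15 binary labels in total.
ORGAN_SYSTEMS = {
    "lungs_pleura": ["no_apparent_disease", "atelectasis", "nodule",
                     "emphysema", "effusion"],
    "liver_gallbladder": ["no_apparent_disease", "hepatic_lesion",
                          "dilation", "fatty_liver", "gallstone"],
    "kidneys_ureters": ["no_apparent_disease", "renal_lesion", "atrophy",
                        "renal_stone", "cyst"],
}

# Flatten into one index per label, as the output layer of a
# multi-label classifier would be arranged.
LABEL_INDEX = {
    (organ, disease): i
    for i, (organ, disease) in enumerate(
        (o, d) for o, diseases in ORGAN_SYSTEMS.items() for d in diseases
    )
}

assert len(LABEL_INDEX) == 15  # matches the study's 15 output labels
```

Framing the task this way lets one network emit all 15 probabilities at once, rather than training a separate classifier per disease.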
The labels extracted by the rule-based algorithm were 91% to 99% accurate by manual validation.
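A validation of this kind amounts to measuring agreement between the rule-based labels and a manually reviewed subset. A minimal sketch; the label values below are illustrative only, not the study's data:

```python
def label_accuracy(weak, manual):
    """Fraction of scans where the rule-based label matches the manual read."""
    matches = sum(w == m for w, m in zip(weak, manual))
    return matches / len(manual)

# Illustrative values: 4 of 5 weak labels agree with the manual review.
acc = label_accuracy([1, 0, 1, 1, 0], [1, 0, 1, 0, 0])
# → 0.8
```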
Potential for Improved Radiology Workflow
The AI-based approach has the potential to improve workflow while maintaining and even improving diagnostic performance, according to the researchers. Triaging the CT scans would enable the radiologist to start the day with the most serious cases first and save the routine ones for later. Research has shown that this might improve performance, Dr. Lo said, because the radiologist is able to prioritize the cases requiring the greatest focus.
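A triage of this kind might be sketched as ordering the worklist by the model's suspicion score, so the most serious cases surface first. The case IDs and scores below are hypothetical, not from the study:

```python
import heapq

def triaged_worklist(cases):
    """Return case IDs ordered from most to least suspicious.

    `cases` is a list of (case_id, suspicion_score) pairs, where the score
    is a hypothetical model output such as the maximum disease probability.
    """
    # Negate scores so the min-heap pops the highest-suspicion case first.
    heap = [(-score, case_id) for case_id, score in cases]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

order = triaged_worklist([("ct_001", 0.12), ("ct_002", 0.97), ("ct_003", 0.55)])
# → ["ct_002", "ct_003", "ct_001"]
```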
“It’s a win-win situation that may help address the huge burnout issues plaguing our field while potentially improving performance,” he said.
The researchers plan to continue developing the algorithm and scale it up to look at more diseases in more organs. Dr. Rubin’s move from Duke to the University of Arizona and its clinical partner Banner Health means they may now leverage two large, diverse populations to build a more robust product.
Eventually they hope to draw from up to a million cases, with half coming from Duke and half from Banner Health. This will allow them to reduce biases and assess the algorithm’s performance across different CT technologies and different vendors.
“Between our two big health systems, we can cover all of that and be more confident that the algorithm performance will be able to generalize,” Dr. Lo said. “We’ve shown some good initial promise and if we keep working at it, improving both the performance and the broad applicability, then that’s where it starts getting really interesting clinically.”
Access the Radiology: Artificial Intelligence study, “Classification of Multiple Diseases on Body CT Scans Using Weakly Supervised Deep Learning.”