      QIBA Newsletter May 2018 • Volume 10, Number 2: The Value of QI in Radiology 

       

In This Issue:
IN MY OPINION
ANALYSIS TOOLS & TECHNIQUES
QIBA IN THE LITERATURE

QIBA MISSION
Improve the value and practicality of quantitative imaging biomarkers by reducing variability across devices, sites, patients and time.

QIBA CONNECTIONS
QIBA Wiki

QIBA CONTACT
Contact Us

Edward F. Jackson, PhD
QIBA Chair

       


      IN MY OPINION

       

       

      Added Value of Quantitative Imaging  

      By Alexander Guimaraes, MD, PhD

In the era of precision imaging, decreased variability in image interpretation will be paramount if we are to better understand and treat our patients. Over the last generation, there have been dramatic increases in the understanding of disease processes through genetics and serum analytics, as well as imaging.

As these diagnostic advances have occurred, however, the ability to integrate “omic” analysis into quotidian radiologic workflow has lagged, and the effect on diagnostic imaging specificity has been less dramatic. Radiology remains a largely subjective, observational science, relating patterns to specific diagnoses. Quantitative imaging is routinely used in many circumstances to assess disease, with salient examples including treatment response measures using RECIST and PERCIST1–3 obtained from cross-sectional imaging modalities such as CT, MRI and PET, and assessment of renal artery stenosis using ultrasound.4 Yet the unstudied variability in these values has produced results that are often received skeptically in terms of absolute confidence across platforms and patient cohorts. Although we make measurements routinely in daily radiologic evaluation, we do not routinely use those measures for a more complex and specific assessment of response.

Reasons for this limited use of quantitative measures in the daily radiologic workflow include the lack of training in residency, the lack of universally adopted tools for the radiologist at the PACS workstation and the unknown (and sometimes substantive) variance in these values. In addition, radiologists face an ever-increasing number of studies concomitant with increasing demands on turnaround times. Given this obvious disconnect, radiologists, if they are to maintain or increase their value, stand at a critical crossroad and have a unique opportunity to modify their practice: by integrating deep/machine learning with standardized quantitative and molecular imaging techniques, they can transform their workflow to produce high-value, precise imaging that allows for more specific diagnosis and assessment of response.

As imaging advances toward becoming an assay, strong consideration must be given to the precision, accuracy and validity of quantitative imaging biomarkers in order to understand the added “value” of these measures.5 A better understanding of the clinical applicability of an imaging biomarker is needed, in the same sense that a serum measure that does not specify or correlate with a disease process is meaningless and not an appropriate indicator of that process. Furthermore, for imaging to become more of an assay, the true reliability, or variance, of each of these measures must be known. Assessment of variability strives to evaluate each of the predicted major sources of variability against criteria that depend on the application. The importance of this is clear when deciding whether a change in an imaging biomarker represents a meaningful change, or whether a specific value of a biomarker correlates with disease or is in fact meaningful. The goal of QIBA is to dissect each of the areas of variability in producing a quantitative imaging biomarker, including data acquisition, processing, display and analysis, in order to make all aspects of image acquisition and analysis platform-agnostic and to allow confident use of each imaging biomarker.6,7

By dissecting each factor in the data stream for the production of SUV measures, for example, QIBA Profiles provide a recipe for the accurate dissemination of a commonly used FDG-PET/CT quantitative imaging biomarker in routine clinical management of disease. Currently, there are 20 QIBA Profiles in various stages of development, and success engenders momentum across the modalities being studied, allowing imaging to become an assay and quantitative imaging biomarkers to be used more routinely in precision imaging.


Alexander R. Guimaraes, MD, PhD, is an Associate Professor of Radiology and Section Chief of Abdominal Imaging at OHSU. Previously, he was Medical Director at the Martinos Center for Biomedical Imaging at Massachusetts General Hospital, Boston. Dr. Guimaraes’ research interests include developing novel imaging technologies that attempt to quantify the effects of unique oncologic therapeutic strategies on the tumor microenvironment. Dr. Guimaraes is Vice-chair of QIBA.

            

      References:

      1. Eisenhauer EA, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer 2009; 45, 228-247.
      2. Skougaard K, et al. Comparison of EORTC criteria and PERCIST for PET/CT response evaluation of patients with metastatic colorectal cancer treated with irinotecan and cetuximab. J Nucl Med 2013; 54, 1026-1031.
      3. Wahl RL, et al. From RECIST to PERCIST: Evolving Considerations for PET response criteria in solid tumors. J Nucl Med 2009; 50 Suppl 1, 122S-50S.
      4. Granata A, et al. Doppler ultrasound and renal artery stenosis: An overview. J Ultrasound 2009; 12, 133-143.
      5. Rosenkrantz AB, et al. Clinical utility of quantitative imaging. Acad Radiol 2015; 22, 33-49.
      6. Buckler AJ, et al. Quantitative imaging test approval and biomarker qualification: interrelated but distinct activities. Radiology 2011; 259, 875-884.
      7. Sullivan DC. Imaging as a quantitative science. Radiology 2008; 248, 328-332.

       


       

       

      ANALYSIS TOOLS & TECHNIQUES

       

      DRO Applications in fMRI 

      By James T. Voyvodic, PhD

QIBA’s fMRI Biomarker Committee has developed its first Profile for performing diagnostic imaging so that a brain fMRI map provides a reliable biomarker for the locations of brain function. In our groundwork for preparing the Profile, we found that data analysis methods and subject-dependent sources of variance (SOVs), including head motion, task performance, and tissue pathology, seemed to account for most scan-to-scan variability in fMRI results.

      Because the parameter space for each SOV is enormous, we developed dynamic digital reference objects (DROs) to better understand how these variables affect fMRI results. With help from two rounds of QIBA funding, we created fMRI DROs based on empirical imaging data, to which we added known patterns and amounts of dynamic brain activity. Our first DROs simply involved creating 10 simulated realistic fMRI exams (a high-res T1-weighted scan plus 2 T2*-weighted fMRI task time series), and then having eight different institutions download the DROs and generate fMRI maps. Despite processing identical images, the eight sites’ maps differed significantly (Fig 1A). The major difference was in the spatial extent of active brain areas due to differences in thresholding methods. Applying the threshold normalization algorithm recommended in our Profile, however, greatly reduced the inter-site variability in spatial extent of activation (Fig 1B), thus validating that component of the Profile. This DRO study also revealed significant and unexpected variability in the anatomical location of active brain areas due to differences in how T2* and T1 images were aligned at each site, which has prompted a reevaluation of how to standardize the image registration portion of our Profile.
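To give a feel for the general idea of threshold normalization, the Python sketch below thresholds a statistical map at a fixed fraction of its own peak value rather than at a site-specific absolute statistic, so that maps differing only in overall statistic scale yield the same spatial extent of activation. This is a hypothetical simplification for illustration only, not the algorithm specified in the Profile:

```python
import numpy as np

def normalized_threshold_map(stat_map, fraction=0.5):
    """Keep voxels whose statistic exceeds a fixed fraction of the map's peak.

    Hypothetical illustration of threshold normalization: the threshold is
    defined on a relative scale, so maps that differ only by a global
    scaling of the statistic yield the same spatial extent of activation.
    This is NOT the algorithm specified in the QIBA fMRI Profile.
    """
    stat_map = np.asarray(stat_map, dtype=float)
    peak = stat_map.max()
    if peak <= 0:
        return np.zeros(stat_map.shape, dtype=bool)
    return stat_map >= fraction * peak

# Two "sites" whose maps share the same pattern but differ in statistic scale.
rng = np.random.default_rng(0)
site_a = rng.random((16, 16, 8))
site_b = 3.0 * site_a

mask_a = normalized_threshold_map(site_a)
mask_b = normalized_threshold_map(site_b)
print(mask_a.sum(), mask_b.sum())  # identical spatial extent after normalization
```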

DROs were also generated to simulate variability in task performance by modulating DRO brain activity using empirical task-dependent waveforms extracted from hundreds of different patient scans. The goal was to identify objective imaging metrics that could be used to distinguish good scans from bad. Figure 2 shows ROC curves for 400 DROs that differed only in their task performance modulating waveforms, along with a plot of ROC area as a function of a novel task-consistency metric. Using standard thresholding methods, the ROC results represented a continuum (Fig 2A,B), but after threshold normalization the ROC curves separated into a bimodal distribution in which consistency metric values > 0.5 nicely identified the good data sets. We are now using a similar approach to create and test DROs using empirical patterns of head motion, to identify motion metrics that address the still unanswered question: how much motion is too much?
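Because each DRO’s true pattern of activity is known, an ROC area can be computed for any resulting map by scoring voxel statistics against that ground truth. The short Python sketch below shows one conventional way to do this (the rank-based Mann-Whitney identity); it is illustrative only and does not reproduce the consistency metric or the analysis pipeline used in the study:

```python
import numpy as np

def roc_area(stat_map, truth):
    """ROC area for an fMRI map scored against a DRO's known active voxels.

    Uses the Mann-Whitney identity: AUC equals the probability that a
    randomly chosen truly active voxel has a higher statistic than a
    randomly chosen inactive voxel, with ties counted as 0.5.
    """
    scores = np.ravel(stat_map).astype(float)
    truth = np.ravel(truth).astype(bool)
    pos, neg = scores[truth], scores[~truth]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Toy DRO: a small block of truly active voxels plus Gaussian noise.
rng = np.random.default_rng(1)
truth = np.zeros((20, 20), dtype=bool)
truth[5:10, 5:10] = True
stat_map = rng.normal(0.0, 1.0, truth.shape) + 2.0 * truth

print(f"ROC area = {roc_area(stat_map, truth):.3f}")
# In the study, this value would be computed per DRO and plotted against
# the task-consistency metric to look for the bimodal separation.
```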

Overall, we see DROs playing two important roles in Profile development. So far we have focused on creating thousands of DROs to understand the impact of SOVs on fMRI images, in order to minimize that impact and to establish qualifiers for identifying acceptable data. The second role will be to create smaller sets of DROs with known properties and make them available on the QIDW, so that the different actors involved in fMRI can use them to test whether their tools and procedures conform to our QIBA Profiles.

       

       


Figure 1: The same DRO analyzed at different sites using similar methods; each row is the result from a different site. A) Using site-standard thresholds; B) after threshold normalization.

       

       


      Figure 2: ROC results for 400 DROs differing in task performance waveforms. A) ROC curves based on standard thresholds, B) ROC areas in A as a function of consistency index metric, C) ROC curves based on normalized thresholds, D) ROC areas in C versus consistency index.


James T. Voyvodic, PhD, is an associate professor of radiology and neurobiology and technical director of clinical fMRI at Duke University Medical Center, Durham, N.C. He leads the clinical fMRI research effort aimed at improving sensitivity, specificity, and reproducibility of diagnostic fMRI, with particular interest in developing improved algorithms for real-time image analysis, quantitative imaging, and data quality assessment. He is a member of QIBA’s fMRI Biomarker Committee, Metrology Working Group, and MR Coordinating Committee.

       


       

       

       

       

       

Overview of DSC-DRO Generation Tools and Applications

      By Panagiotis Korfiatis, PhD, and Bradley J. Erickson, MD, PhD

A Digital Reference Object (DRO) is a data set meant to simulate some phenomenon; it can be useful in cases where live or phantom data are difficult to obtain. Rather than being acquired, a DRO is created from models of how an imaging device is thought to work. Because a DRO is created using a model of both the imaging device and the object being ‘imaged,’ there are multiple parameters that can be adjusted when creating it.

Dynamic Susceptibility Contrast (DSC) imaging is an MRI method most often used to measure perfusion in the brain. The technique requires a bolus injection of contrast material and relies on the susceptibility changes that occur as the bolus traverses the brain vasculature: the signal intensity decreases in proportion to the amount of contrast material present in the vessels.
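In practice, the measured signal time course is converted into a relative concentration curve using the standard DSC relationship ΔR2*(t) = -(1/TE)·ln(S(t)/S0), where S0 is the pre-bolus baseline signal. The following Python sketch shows only this conversion; the function and parameter names are illustrative:

```python
import numpy as np

def concentration_curve(signal, te, baseline_pts=10):
    """Convert a DSC signal time course to a relative concentration curve.

    Standard DSC relationship (names here are illustrative):
        deltaR2*(t) = -(1 / TE) * ln(S(t) / S0),
    where S0 is the mean pre-bolus baseline signal and deltaR2* is taken
    to be proportional to the contrast agent concentration.
    """
    signal = np.asarray(signal, dtype=float)
    s0 = signal[:baseline_pts].mean()   # pre-bolus baseline
    return -np.log(signal / s0) / te    # units of 1/s if te is in seconds
```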

While the above describes the theory, the realities of human physiology and MRI physics produce cases where these assumptions fail. Therefore, we must use software to determine whether artifacts are present and to estimate perfusion as accurately as possible despite them. Common artifacts in DSC images include susceptibility changes not caused by the inflow of gadolinium, leakage of gadolinium from the intravascular space into brain tissue, noise and several others.

      Despite these challenges, DSC perfusion is widely used, particularly for diagnosis and treatment assessment of brain tumors. For this reason, quantitative assessment of perfusion of brain tissue is an important tool for interpretation of brain imaging. Because of the complexity of the artifacts present, and the rather low signal-to-noise ratio in the images, the processing software makes many important assumptions when creating the cerebral blood flow or blood volume images. Understanding how various image properties and artifacts might impact the values present in these post-processed images is critical.
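As a simple example of the kind of quantity such processing software computes, relative cerebral blood volume (rCBV) is commonly approximated as the area under the tissue concentration-time curve normalized by the area under the arterial input function. The sketch below shows that calculation only; it deliberately omits leakage correction, deconvolution-based CBF estimation and the many other assumptions real software must make:

```python
import numpy as np

def relative_cbv(tissue_conc, aif_conc, dt):
    """Relative CBV as the ratio of areas under the concentration curves.

    tissue_conc : concentration-time curve for a tissue voxel
    aif_conc    : concentration-time curve of the arterial input function
    dt          : sampling interval in seconds

    Illustrative only: real DSC software adds leakage correction,
    deconvolution for CBF, and many other processing steps.
    """
    return np.trapz(tissue_conc, dx=dt) / np.trapz(aif_conc, dx=dt)
```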

There are at least three publicly available tools for creating images that simulate the DSC perfusion process, allowing one to create complete 4D data sets similar to what an actual MRI scan might produce in a brain tumor patient. Each of the models takes a slightly different approach to modeling the process, and each has strengths and weaknesses.

The BNI model (Semmineh et al, 2017) allows for selection of parameters such as field strength, flip angle, repetition time and echo time, and also enables simulation of the effect of the dosing scheme. Both the Mayo model (Korfiatis et al, 2016) and the MGH model (Wu et al, 2003) allow modeling of noise and of the shape of the residue function. In addition, the Mayo model enables simulation of tumor leakage.
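For readers who want a feel for what such simulators do, the following minimal Python sketch generates a single-voxel DSC signal time course from a gamma-variate bolus, using a chosen echo time and added noise. It is a generic, hypothetical illustration of the forward process, not a reproduction of the BNI, Mayo or MGH models, and all parameter values are arbitrary:

```python
import numpy as np

def simulate_dsc_signal(n_timepoints=60, tr=1.5, te=0.030,
                        s0=600.0, t_arrival=15.0, noise_sd=5.0, seed=0):
    """Minimal single-voxel DSC forward simulation (all values arbitrary).

    A gamma-variate bolus models deltaR2*(t); the signal follows
    S(t) = S0 * exp(-TE * deltaR2*(t)), and Gaussian noise is added.
    This is a generic sketch, not the BNI, Mayo or MGH model.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n_timepoints) * tr
    tau = np.clip(t - t_arrival, 0.0, None)        # time since bolus arrival
    delta_r2s = tau ** 3.0 * np.exp(-tau / 1.5)    # gamma-variate shape
    delta_r2s *= 25.0 / delta_r2s.max()            # scale to a peak deltaR2* of 25 1/s
    signal = s0 * np.exp(-te * delta_r2s)          # susceptibility-induced signal drop
    return t, signal + rng.normal(0.0, noise_sd, signal.shape)

t, sig = simulate_dsc_signal()
print(sig[:5].round(1))   # pre-bolus samples stay near the baseline s0
```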

      Accessing the DROs

      Anyone may access the tool to create the DSC-DROs via the QIBA webpage. Starting at the QIBA home page, click the link for the QIDW and then the DSC-DRO web page (See Figure 1).

      At the DSC-DRO home page (See Figure 2) you can choose one of three models and then select the specific acquisition parameters and biologic properties that you wish to simulate. Note that these options are all drop-down menus and that the dropdown arrow is at the far right.

Once you have selected the parameters, you then provide an output file name (Figure 3). At that point, hit the ‘run’ button and the computation begins; this typically requires several minutes. You will see a message with the start time. If other jobs (e.g., from other users, or other jobs you have submitted) are being worked on, it may take longer. Once the job actually begins execution, that is noted, and another message is shown when the job has completed. Once the job is complete (Figure 4), the file is available for download; click the file name to begin the download.

      Digital reference objects can be useful tools to assess various assumptions about an imaging device, patient physiology, and the software used to process images. The DSC-DRO website provides an easy way to produce DSC-DROs that may be useful for further investigation and understanding of the performance of DSC imaging.

       

      Figure 1: Portion of the QIBA QIDW website showing the link to the DSC DRO Modeling Website. The web address is https://www.rsna.org/QIDW/  


      Figure 2: The main web page for the DSC DRO. This provides a brief description of the DSC models, and allows the user to select the model they wish to use.


      Figure 3: This shows how to select the specific options/assumptions for the BNI DSC model. The ‘Options’ dropdown at the far right allows the user to select different values. Once you are satisfied with these options, and have provided an output filename (or agree to use ‘output.zip’), click ‘Submit’, and the computation will begin. You will be notified when the computation is complete and the file is ready for download. This may require several minutes, depending on the model selected, and any other users whose request may be in the queue.


Figure 4: When the job completes (this example required 3:27 to complete), simply click on the name of the file you provided (‘output’ in this case) and it will begin to download to your computer.


Panagiotis Korfiatis, PhD, is a research associate in the Department of Radiology at the Mayo Clinic in Rochester, MN. He joined Mayo Clinic in 2012. Dr. Korfiatis earned a bachelor’s degree in physics, followed by an MSc and a PhD in medical physics, from the University of Patras, Greece.

             

Bradley Erickson, MD, PhD, is the Associate Chair for Research in the Department of Radiology at the Mayo Clinic in Rochester, MN, where his research interests include neuroradiology and imaging informatics.

             

      References:

      1. Korfiatis, Panagiotis, et al. 2016. “Dynamic Susceptibility Contrast-MRI Quantification Software Tool: Development and Evaluation.” Tomography: A Journal for Imaging Research 2 (4): 448–56.
      2. Semmineh, Natenael B., et al. 2017. “A Population-Based Digital Reference Object (DRO) for Optimizing Dynamic Susceptibility Contrast (DSC)-MRI Methods for Clinical Trials.” Tomography: A Journal for Imaging Research 3 (1): 41–49.
3. Wu, Ona, et al. 2003. “Tracer Arrival Timing-Insensitive Technique for Estimating Flow in MR Perfusion-Weighted Imaging Using Singular Value Decomposition with a Block-Circulant Deconvolution Matrix.” Magnetic Resonance in Medicine 50 (1): 164–74.

       



       

      QIBA Activities 
      QIBA Biomarker Committees Open to All Interested Persons 

      Meeting summaries, the QIBA Newsletter and other documents are available on the QIBA website RSNA.ORG/QIBA and wiki http://qibawiki.rsna.org/.  Please contact QIBA@rsna.org for more information.  


       

      QIBA Resources: 

      Please contact QIBA@rsna.org for more information. We welcome your participation. 


       

QIBA IN THE LITERATURE

This list of references showcases articles that mention QIBA, quantitative imaging or quantitative imaging biomarkers. In most cases, these are articles published by QIBA members or articles related to a research project undertaken by QIBA members that may have received special recognition. New submissions are welcome and may be directed to QIBA@rsna.org.