
Identifying Early Neurodevelopmental Impairment Using Cranial US

Deep learning models used to enable early identification of infants at risk


Tahani Ahmad, MD

Despite improved survival of very preterm infants (VPI), born before 31 weeks of gestation, rates of neurodevelopmental impairment (NDI), including cerebral palsy, deafness, blindness, and language or cognitive delay, remain high.

“Predicting neurodevelopmental outcomes in very preterm infants is challenging, in part because current assessments evolve slowly over time,” said Tahani Ahmad, MD, full professor in the Department of Radiology at Dalhousie University in Halifax, Canada. “Most predictive measures are performed as a child grows, with a definitive diagnosis of neurodevelopmental impairment often not established until around three years of age. By that point, opportunities for the earliest interventions may have already been missed.”

Although cranial US (CUS) is routinely performed during the neonatal period, its prognostic potential has been underutilized. As part of her research for her R&E Foundation Philips/RSNA Research Seed Grant, Dr. Ahmad and team aimed to use CUS images, obtained during the neonatal period, to generate timely predictions of neurodevelopmental outcomes.

By shifting prediction to much earlier in an infant’s life, this approach would enable earlier clinical decision-making, closer monitoring and more timely access to rehabilitation and support services that have been proven to improve the functional outcome and quality of life.

“Most predictive measures are performed as a child grows, with a definitive diagnosis of neurodevelopmental impairment often not established until around three years of age. By that point, opportunities for the earliest interventions may have already been missed.”

— TAHANI AHMAD, MD

Moving Past One-Size-Fits-All Prediction

CUS is used as a screening tool for brain injury in infants. While severe abnormalities on CUS are predictors of NDI, the clinical outcome for apparently “normal” scans is variable.

Traditionally, conventional logistic regression has been used to predict NDI. However, it has important limitations when dealing with complex, high-dimensional data such as medical images, where the number of measurable features can far exceed the number of patient cases available to train the model.

To handle this complexity, the team explored elastic net regression and deep learning to better capture nonlinear relationships and interactions among clinical and imaging variables.
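To illustrate why penalized regression helps when measurable features outnumber patient cases, here is a minimal sketch using scikit-learn's elastic-net-penalized logistic regression on synthetic data. All data, dimensions, and parameter values are hypothetical stand-ins, not the study's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 200 cases, 500 candidate features (features >> cases).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))
# The outcome depends on only a few features; the rest are noise.
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Elastic net mixes L1 and L2 penalties: it shrinks uninformative
# coefficients toward zero while tolerating correlated predictors.
model = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5, C=0.1, max_iter=5000
)
model.fit(X, y)

n_kept = int(np.sum(model.coef_ != 0))
print(f"features retained: {n_kept} of 500")
```

The sparsity induced by the L1 component is what makes the high-dimensional setting tractable: most of the 500 candidate features are dropped from the fitted model.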

“Deep learning, in particular, allows models to learn meaningful patterns directly from ultrasound images, offering a powerful way to extract information that is difficult to quantify using conventional methods,” said Dr. Ahmad, who is also a pediatric radiologist and neuroradiologist at both the Izaak Walton Killam (IWK) Health Centre in Nova Scotia, Canada and the University of Jordan in Amman.

This retrospective study encompassed a cohort of VPI born at 22-30 weeks of gestation in Nova Scotia, Canada between 2004 and 2016. Infants with congenital anomalies, chromosomal aberrations, those who received palliative care, or those missing CUS were excluded.

Clinical data was retrieved from the Nova Scotia Provincial Perinatal Follow-Up Program (PFUP) database, which prospectively collects information on maternal, perinatal and neonatal status, illnesses and treatments. Long-term neurodevelopmental outcomes at 36 months of corrected age, including cerebral palsy, cognitive or language delay, blindness and deafness, were also collected.

Radiological data was obtained from PACS. Routine CUS was performed on all VPI admitted to the neonatal intensive care unit at three time points: within the first week after birth, at six weeks of chronological age and at or near term-equivalent age (36-40 weeks postmenstrual age).

Three different AI models were developed to investigate the prediction of NDI in VPI. 

[Image: a newborn undergoing cranial ultrasound imaging of the head]

Turning Routine US Data into Predictive Insight

The first model focused exclusively on CUS images to automate the detection of abnormal findings in VPI. Sequential coronal images were labeled by a pediatric radiologist as normal or abnormal.

Researchers then trained several deep learning convolutional neural networks (CNNs) to classify the images, with a model known as EfficientNetB0 performing best. The model learned to detect brain abnormalities automatically and could also flag whether it was confident or uncertain about a given result. According to Dr. Ahmad, this model can help standardize and speed up US interpretation.

The second model combined CUS images with prenatal, perinatal and neonatal clinical variables, such as pregnancy, birth and newborn data. Image features and clinical predictors were processed separately and then fused for outcome prediction in patients at age three.

This integrated approach outperformed models based on clinical data alone, with the strongest predictive performance seen in anterior coronal images at six weeks.
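A late-fusion scheme of the kind described, in which image-derived features and clinical predictors are processed separately and then combined for outcome prediction, can be sketched as follows. The embedding size, clinical variables, synthetic data, and final classifier are all illustrative assumptions, not the study's actual architecture:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 300
# Hypothetical inputs: a CNN embedding per ultrasound image plus a small
# set of clinical variables (e.g. gestational age, birth weight).
img_features = rng.normal(size=(n, 64))  # e.g. penultimate-layer CNN output
clinical = rng.normal(size=(n, 8))
y = ((img_features[:, 0] + clinical[:, 0]) > 0).astype(int)

# Late fusion: concatenate the two modalities' feature vectors
# before fitting the final outcome classifier.
fused = np.hstack([img_features, clinical])

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"fused-model test accuracy: {clf.score(X_te, y_te):.2f}")
```

The key design point is that neither modality is forced into the other's representation: each contributes its own feature vector, and the fusion step lets the classifier weigh both.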

The third model reflected routine clinical practice by using only clinical variables and radiology report–based CUS findings, without image analysis. Machine-learning methods, particularly random forests, demonstrated improved predictive performance over traditional logistic regression for NDI at three years.
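To see why a random forest can outperform logistic regression when outcomes depend on interactions among predictors, consider this synthetic comparison. The data is a toy construction, not the study's cohort, and the numbers will not match the paper's results:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))
# Outcome driven by an interaction (XOR-like), invisible to a linear model.
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)

results = {}
for name, model in [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("logistic regression", LogisticRegression(max_iter=1000)),
]:
    results[name] = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {results[name]:.3f}")
```

On this data the linear model hovers near chance while the forest, which splits on one variable conditional on another, recovers the interaction.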

“What we found was that combining clinical data with cranial ultrasound images substantially improved predictive performance. Models that integrated both data types consistently outperformed those based on clinical information alone,” Dr. Ahmad said.

In addition to the prognostic model, the research produced an unplanned but valuable output: a separate diagnostic model based solely on CUS images.

“Ultrasound images acquired at six weeks of age were especially informative, demonstrating that routinely obtained imaging contains valuable prognostic information when analyzed using AI-based methods,” Dr. Ahmad said.

She emphasized that combining diagnostic and prognostic approaches strengthens the overall contribution of this research.

First, the diagnostic model analyzes CUS images to identify abnormalities that may require further evaluation or intervention.

Then, the prognostic model uses these findings to estimate the likelihood of NDI at three years of age, supporting earlier and more targeted clinical decisions and interventions.
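Conceptually, that two-stage workflow could look like the following sketch. Every function, model, and number here is a hypothetical stand-in, not the team's implementation:

```python
def assess_infant(cus_images, clinical_vars, diagnostic_model, prognostic_model):
    """Illustrative two-stage pipeline: diagnose per image, then estimate risk."""
    # Stage 1: the diagnostic model labels each CUS image.
    findings = [diagnostic_model(img) for img in cus_images]
    # Stage 2: the prognostic model turns findings + clinical data
    # into an estimated probability of NDI at three years.
    risk = prognostic_model(findings, clinical_vars)
    return findings, risk

# Toy stand-ins for the two models.
diag = lambda img: "abnormal" if sum(img) > 1.0 else "normal"
prog = lambda findings, clin: min(1.0, 0.1 + 0.3 * findings.count("abnormal"))

findings, risk = assess_infant(
    [[0.2, 0.1], [0.9, 0.8]], {"ga_weeks": 27}, diag, prog
)
print(findings, f"risk={risk:.2f}")  # -> ['normal', 'abnormal'] risk=0.40
```

The point of chaining the stages is that the diagnostic output becomes a structured input to the prognostic estimate, rather than two disconnected reads of the same scan.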

Additional Research and Validation Needed

These findings highlight the potential of AI to enhance early risk stratification in VPI using data already collected in routine care.

In the future, AI-enhanced US analysis could support earlier identification of high-risk infants, help guide follow-up and intervention strategies and prioritize access to rehabilitation services, optimizing health care resources and maximizing clinical benefit.

Dr. Ahmad noted that before clinical adoption, external validation in diverse populations and seamless integration into clinical workflows will be essential. Once validated, Dr. Ahmad and her team aim to develop a user-friendly application that would allow neonatologists to upload CUS images and receive timely, clinically meaningful risk stratification at the point of care.

The Philips/RSNA Research Seed Grant was critical to the success of this project.

It enabled the development of a large, well-curated CUS and clinical dataset, supported advanced AI modeling and fostered interdisciplinary collaboration.

The grant directly led to multiple peer-reviewed publications and national and international presentations and laid the groundwork for future multi-center studies, Dr. Ahmad noted, reinforcing its lasting impact on both patient care and career development.

“Importantly, the funding supported research in a field where there is significant scarcity of data and limited prior work—a space in which other granting agencies may be hesitant to invest,” Dr. Ahmad concluded. “It also played a key role in advancing our team’s academic careers and in building institutional capacity and expertise at both the university and health care levels.”

For More Information

Learn more about R&E Foundation funding opportunities.

Read previous RSNA News stories on deep learning.