RSNA Intracranial Hemorrhage AI Challenge Now Open

Submissions are due Nov. 11; winners will be recognized at RSNA 2019


Registration is open for the third annual RSNA artificial intelligence (AI) challenge: the Intracranial Hemorrhage Detection and Classification Challenge.

This year, researchers are working to develop algorithms that can identify and classify subtypes of hemorrhage on head CT scans. The dataset, which comprises more than 25,000 head CT scans contributed by several research institutions, is the first multiplanar dataset used in an RSNA AI Challenge.

The AI Challenge is a competition among researchers to create applications that perform a defined task according to specified performance measures. Last year’s pneumonia detection challenge had more than 1,400 teams.

“The goal of an AI challenge is to explore and demonstrate the ways AI can benefit radiology and improve clinical diagnostics,” said Luciano Prevedello, MD, MPH, chair of the Machine Learning Steering Subcommittee of the RSNA Radiology Informatics Committee. “By organizing these data challenges, RSNA plays a critical role in demonstrating the capabilities of machine learning and fostering the development of AI in improving patient care.”

The RSNA Machine Learning Steering Subcommittee worked with volunteer specialists from the American Society of Neuroradiology (ASNR) to label the CT scans for the presence of five subtypes of intracranial hemorrhage — an effort of unprecedented scope in the radiology community.

The challenge is being run on a platform provided by Kaggle, Inc. (a subsidiary of Alphabet, Inc., also the parent company of Google). Kaggle has recognized the RSNA Intracranial Hemorrhage Detection and Classification Challenge as a public good and will award $25,000 to the winning entries.

On Sept. 3, the first wave of data was released to researchers who are working to develop and “train” algorithms. The training phase runs through Nov. 4. During this phase, participants will use a training dataset that includes the radiologists’ labels to develop algorithms that replicate those annotations.

During the evaluation phase, from Nov. 4 to Nov. 11, participants will apply their algorithms to the testing portion of the dataset, which is provided to them with the annotations withheld.

Their results will then be compared to the annotations on the testing dataset, and an evaluation metric will be applied to rate their accuracy and determine the winners.
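The article does not name the evaluation metric, so as an illustration only, the sketch below assumes a simple per-label log loss averaged over scans and hemorrhage subtypes; the subtype names and the metric itself are assumptions, not details confirmed by the source.

```python
import math

# Hypothetical subtype labels for illustration; the article says five
# subtypes were annotated but does not list them here.
SUBTYPES = ["epidural", "intraparenchymal", "intraventricular",
            "subarachnoid", "subdural"]

def mean_log_loss(y_true, y_pred, eps=1e-15):
    """Mean binary cross-entropy over all (scan, subtype) pairs.

    y_true: list of dicts mapping subtype -> 0/1 ground-truth label
    y_pred: list of dicts mapping subtype -> predicted probability
    """
    total, n = 0.0, 0
    for truth, pred in zip(y_true, y_pred):
        for subtype, label in truth.items():
            # Clip probabilities away from 0 and 1 so log() stays finite.
            p = min(max(pred[subtype], eps), 1.0 - eps)
            total += -(label * math.log(p) + (1 - label) * math.log(1.0 - p))
            n += 1
    return total / n
```

For example, a single scan labeled positive for subdural hemorrhage and predicted at 0.9 probability, with all other subtypes correctly predicted near 0, would score a low (good) loss, while confident wrong predictions would be penalized heavily.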

Results will be announced in November, and the top submissions will be recognized in the AI Showcase Theater during the RSNA annual meeting, Dec. 1-6, at McCormick Place, Chicago.

For More Information

To learn more about the 2019 challenge and previous years' challenges, visit